Tuesday, December 28, 2021

The new nanotechnology needs miniaturized microelectronics.

  





The bottleneck in the interaction between nanomachines and quantum computers is the interface. Computer keyboards and screens use binary systems, which means the development of binary systems continues alongside quantum research. Artificial intelligence is the best tool for controlling nanomachine swarms.

But those systems require powerful computers. The nanomachines also require internal, well-protected computers. Screens and keyboards are needed to send commands to nanomachines through the quantum system.

Developing new quantum systems is not enough. If we want to use quantum computers, things like screens and keyboards are the bottleneck in the interaction between quantum computers and humans. When we develop new quantum systems, we must load information into them, and for that kind of duty ever more powerful binary computers are needed.


Quantum systems are large, so if we want to use them to control things like nanomachines, we must realize that the nanomachine can use WiFi to communicate with the quantum systems. But the data that a nanomachine sends and receives must be decoded for the quantum system.


The quantum computer can interact with nanorobots over the internet. The problem is that those systems use such different architectures that there must be coders and decoders between the nanomachine and the quantum system. And if we think of things like submarines that can swim in our blood vessels, those systems require control software and computers small enough to fit in a tiny body.
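The coder-decoder idea can be sketched in code. The following is a minimal, purely illustrative codec: it assumes the nanomachine speaks an 8-bit serial protocol and that some front-end on the quantum-system side consumes plain bit streams. The command string and function names are invented for the example.

```python
# Hypothetical codec between a text command and the bit stream a
# nanomachine radio link might carry. The 8-bit ASCII framing is an
# illustrative assumption, not a real nanomachine protocol.

def encode_command(command: str) -> list[int]:
    """Turn a text command into a most-significant-bit-first bit stream."""
    bits = []
    for byte in command.encode("ascii"):
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
    return bits

def decode_command(bits: list[int]) -> str:
    """Reassemble the bit stream back into the original command."""
    data = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        data.append(byte)
    return data.decode("ascii")

stream = encode_command("OPEN_VESSEL")   # hypothetical surgical command
assert decode_command(stream) == "OPEN_VESSEL"
```

A real bridge would of course add framing, error correction, and the quantum-side transduction, which are exactly the hard parts the text points at.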

Intelligent technology means that the system's physical mechanics require less energy. But if we want to put the intelligent parts in microchips, controlling that kind of system requires more powerful computers inside the machine than dummy technology does. The problem is that the power source of those systems must be extremely small.

Or electricity must be transferred to those miniaturized systems by using radio waves: an antenna would induct that energy into the systems of the miniature robots. The problem is that miniaturized computers can overheat.

Another version is to use a miniaturized quantum-radio system, which uses superpositioned and entangled particles to control the "radios" that aim the submarine or some other nanorobot.

Superpositioned and entangled particles also protect the data from turbulence. If there are unpredicted errors, the nanomachine loses its accuracy. Those nanomachines are next-generation tools for surgeons, and they can open vessels in the human body.

When the quantum entanglement hits the receiver, it inducts power into it. The use of quantum entanglement guarantees high accuracy in data transportation, and the energy is always at the right level. If the energy level is too high, the nanocomputer will overheat.



https://newatlas.com/physics/new-distance-record-quantum-entanglement-light-matter/


https://scitechdaily.com/fundamental-discovery-used-to-turn-nanotube-into-tiny-transistor-25000x-smaller-than-width-of-a-human-hair/


https://scitechdaily.com/revolutionary-new-intelligent-transistor-developed/


https://en.wikipedia.org/wiki/Nanosubmarine


Image:http://news.bbc.co.uk/2/hi/science/nature/2135779.stm


https://networkedinternet.blogspot.com/


Tuesday, December 21, 2021

The singularity of consciousness.





Quantum teleportation could make it possible to transmit data between neurons and computers.


The image above the text demonstrates the idea of a straight-operating BCI (Brain-Computer Interface). BCI can also stand for Brain-Controlled Interface, but they are almost the same thing. Normally the BCI needs a microchip. But there is the possibility that the brain could control microprocessors directly by using superpositioned and entangled magnetic fields.

So that kind of system might not require any microchip for interacting with computers. The only thing the computer needs is a radio receiver-transmitter that can resonate with neuro-electricity. And the system must have high enough accuracy to decode and encode EEG signals straight from the human body.

Artificial intelligence and superconducting technology could make this kind of system possible in real life. The AI can separate the wanted signals from the white noise, and that makes it possible to connect the BCI directly with the nervous system.

The same systems used with neuroprosthetics can be used to move virtual objects, like virtual hands on screens. Those virtual hands can operate virtual control tables connected with robots or the central computer of a jet fighter.


x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x



Quantum entanglement could make TRV (Technical Remote View) possible.


Researchers have reported that a tardigrade was quantum entangled. In the quantum world, the entanglement might cover the entirety, or only small parts of an organism or other entirety can be entangled. The thing is that even if only a couple of ions or electrons are entangled, that can affect the entire organism or quantum entirety.

The entangled electrons and ions can cause quantum annealing in the quantum system. In the case of living creatures, that kind of effect can cause electromagnetic resonance in the organism's nervous system. That kind of case might have a big effect on the entirety.

The thing is that parts of those organisms might be quantum entangled, but the entirety of the tardigrade is not entangled. When somebody talks about entangled organisms, they mean creating conditions in which those organisms could share data.


A simple way to "teleport" is to connect a VR (Virtual Reality) set to quadcopters that are equipped with microphones and CCD cameras.


So we might think of teleportation as the case where our senses are connected with remote systems. The simplest way to do that is to connect a VR (Virtual Reality) system with a drone that has a microphone and cameras. Those sensors can transmit data to the controller as if the controller were physically at the place.

The senses could also be collected from other organisms or machines connected to the same network segment, or, simply said, from people who share the same frequency. Normally, people describe this as connecting the brainwaves of the organisms into one entirety.

In that case, the EEG curves of the organisms would synchronize to the same frequency. That could be created by using superpositioned and entangled electromagnetic fields. The data would travel between those superpositioned and entangled magnetic fields, and that would allow the users of the system to share their senses and other things with everybody who is connected to the network.

Researchers have claimed that the tardigrade is the first multi-cell organism to be quantum entangled and live afterward. The tardigrades were frozen, then the quantum entanglement was created between them, and after that those small animals woke up. The entanglement of tardigrades might not seem very impressive, but if it is confirmed, it is one of the greatest advances in the history of quantum systems.


When we talk about quantum teleportation, we mean the situation where the oscillation of atomic or subatomic particles synchronizes. That makes those particles act as if they were the same particle.


Sometimes we forget that electromagnetic fields can also be superpositioned and entangled. That opens new visions for controlling quantum computers. If the electromagnetic fields of the nervous system and a quantum computer are connected and synchronized, computers can share data with living neurons, and humans can control computers directly with EEG waves.

TRV (Technical Remote View) is one version of the BCI (Brain-Computer Interface). The technical description of TRV is a camera system connected with virtual reality. That system could be located on the back of a bug, and the operator could see the things that happen in some room as if really being inside that space.

Sometimes there are stories about things like eavesdropping tools installed in tooth implants. Today those miniaturized systems can send GPS data and biometric information even to satellites. Also, the new spy cameras that are the size of a salt grain can be hidden in contact lenses. And along with nanotechnical microchips, that system can transmit data to the internet by using a mobile telephone as a relay station.



The origin of the BCI is in neuroprosthetics, as I wrote before.


The origin of the BCI is in neuroprosthetics: EEG-based control systems that can control a prosthesis. Those microchips can be installed surgically in a person's head, or they can be located in a helmet. That system can revolutionize robotics and computing.

In the same way that EEG waves can control a prosthesis, they can control human-looking robots, aircraft, and space systems. Those systems are under development by governmental and private organizations.

The system can use a virtual workspace where the brainwaves move virtual hands that use virtual consoles. That means the operator can connect straight with the computer of an aircraft or a human-looking robot.

The BCI is a system created for controlling intelligent prostheses. The same microchips can connect humans to the internet or control robots by using their EEG. TRV is a remote radio- or quantum-based system that can communicate with human or animal brains from long distances.

The technical remote view could be created by using superpositioned and entangled magnetic fields. The idea is that a radio wave would send the data straight to neurons by benefiting from magnetite crystals.

Another magnetic field would be formed by a radio transmitter that sends radio waves at a frequency that makes the targeted neurons oscillate. This kind of system could turn any organism into a robot or read its neuro-electricity, and the distance at which the system can do those things depends on how sensitive the receiver is.


Another thing this system requires is a coder-decoder that can turn the EEG into code that can be shown on a computer screen, and of course the opposite way.
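A very reduced sketch of such a coder-decoder could classify short EEG windows by which frequency band dominates and map that to a screen command. The band names are standard EEG terminology, but the band-to-command mapping and the input format are invented assumptions for illustration.

```python
# Hypothetical EEG coder-decoder sketch: map the dominant frequency band
# of a short EEG window to an on-screen command. The command names and
# the mapping itself are illustrative assumptions, not a real protocol.

def dominant_band(powers: dict[str, float]) -> str:
    """Return the band with the largest measured power."""
    return max(powers, key=powers.get)

BAND_TO_COMMAND = {       # assumed mapping for the example
    "alpha": "IDLE",      # relaxed state
    "beta": "SELECT",     # active concentration
    "delta": "SLEEP",     # deep-sleep pattern
}

def decode_eeg(powers: dict[str, float]) -> str:
    return BAND_TO_COMMAND.get(dominant_band(powers), "UNKNOWN")

print(decode_eeg({"alpha": 0.2, "beta": 0.7, "delta": 0.1}))  # SELECT
```

Real BCIs use trained classifiers over many channels rather than a single band-power lookup, but the role of the component is the same: turn neural measurements into discrete commands the screen can show.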


The ability to transmit data between neurons and computers gives the possibility to create half-organic, half-quantum computers. Those could be more powerful than any existing computer system, including quantum computers, and maybe even more powerful than human brains.

Those systems are only one use for this kind of TRV system. Another is intelligence work: the senses of humans or animals could be connected with computers. And those systems could also be used to transmit data between human brains.


https://www.cnet.com/news/no-tardigrades-have-not-been-quantum-entangled-with-a-qubit/


https://futurism.com/in-five-years-your-smartphone-could-be-reading-your-mind


https://futurism.com/researchers-have-linked-a-human-brain-to-the-internet-for-the-first-time-ever


https://www.iflscience.com/physics/tardigrade-might-be-first-animal-to-be-quantum-entangled-and-live/


https://www.livescience.com/tardigrade-quantum-entangled-experiment


https://mspoweruser.com/microsoft-is-working-on-a-brain-computer-interface-but-for-a-good-reason/


https://www.nextbigfuture.com/2021/12/camera-the-size-of-a-grain-of-sand.html


https://www.sciencealert.com/physicists-claim-they-ve-entangled-a-tardigrade-with-qubits-but-did-they


https://www.sciencedirect.com/science/article/abs/pii/S0925231220314223


https://en.wikipedia.org/wiki/Brain%E2%80%93computer_interface


https://en.wikipedia.org/wiki/Neuroprosthetics


Image:https://mspoweruser.com/microsoft-is-working-on-a-brain-computer-interface-but-for-a-good-reason/

Saturday, December 18, 2021

For the first time, a probe has touched the Sun.





NASA's Parker probe has dived into the Sun's atmosphere. The probe used interesting technology that helped keep it cool. It could be possible to make the probe dive even deeper by using a magnetic field that pushes the plasma away from the probe. That would decrease the interaction between the probe and the plasma that comes from the Sun.

The reason why Parker was sent on its mission is to collect data on the solar wind. The solar wind is high-energy plasma that can destroy spacecraft. But there is another reason why close contact with the Sun is needed: probes like Parker collect data for fusion tests. The power source of the Sun is the fusion reaction, and close contact with the Sun can help to model the conditions that make fusion possible.

In some visions, probes like Parker could collect antimatter from the solar atmosphere. The probe would travel to the Sun, and the antimatter collector would dive deep into the corona. Then it could open a solar sail, and the solar wind could push the probe back to Earth.

Antimatter production is extremely expensive. In that vision, the probe would create it by letting the beta particles of the solar wind impact a gold leaf. Antimatter could be used in fusion reactors to start the fusion reaction.





Antimatter creates an extremely high energy load, and that energy could be used for starting and maintaining fusion reactions. The antimatter could be annihilated with matter around the plasma ring in Tokamak-type reactors. That energy dose could start a fusion reaction, and it could raise the temperature at certain intervals. The purpose of antimatter would be to boost the fusion reaction in fusion reactors.

When the fusion reactor starts to turn too cold, a small dose of antimatter could return the temperature to the needed level. The problem with a fusion reactor is that the needed temperature is higher than in the Sun's core. In fusion reactors, the pressure of the Sun's core must be compensated by raising the temperature so high that the heat replaces the missing pressure.

The use of antimatter is one answer to how to create enough energy for starting a self-maintaining fusion reaction. But the high price would limit the use of antimatter for that purpose, which is why antimatter might instead be collected from space.
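The energy argument can be put in numbers. Annihilating a mass m of antimatter with an equal mass of ordinary matter releases E = 2mc², which is standard physics; the milligram quantity below is just an example figure.

```python
# Energy released when antimatter annihilates with an equal mass of
# matter: E = 2 * m * c^2. The example mass is illustrative only.
c = 2.998e8  # speed of light, m/s

def annihilation_energy_joules(antimatter_kg: float) -> float:
    """Total energy from annihilating this mass with an equal mass of matter."""
    return 2 * antimatter_kg * c ** 2

e = annihilation_energy_joules(1e-6)   # one milligram of antimatter
tnt_tonnes = e / 4.184e9               # 1 tonne of TNT = 4.184e9 J
# ~1.8e11 J, on the order of 40 tonnes of TNT from a single milligram
```

That energy density is why even tiny doses would be attractive for reheating a fusion plasma, and also why production cost, not physics, is the limiting factor the text mentions.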


https://scitechdaily.com/parker-solar-probe-for-the-first-time-in-history-a-spacecraft-has-touched-the-sun/


https://en.wikipedia.org/wiki/Tokamak


Friday, December 17, 2021

ExoMars discovered water in Mars' Grand Canyon.

 




Image 1


"Artist’s impression of Mars Express. The background is based on an actual image of Mars taken by the spacecraft’s high resolution stereo camera. Credit: Spacecraft image: ESA/ATG medialab; Mars: ESA/DLR/FU Berlin" (ScitechDaily/ExoMars Discovers Hidden Water in Mars’ Grand Canyon – The Largest Canyon in the Solar System)


The finding of water is a big step for the plans made for creating a manned Mars station in the future. That water could also be used as fuel in a sample-return mission. In that case, the water would be broken electrolytically into oxygen and hydrogen, and the rocket that lifts samples to the orbiter could use those elements as fuel and oxidizer.

A similar mixture was used in the Saturn V rocket's second and third stages. The use of hydrogen and oxygen means that the rocket leaves only water behind it. The thing that causes pollution in a hydrogen-oxygen rocket is the electrolytic process: creating electricity for the electrolysis requires some kind of power system.
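The electrolysis step is simple stoichiometry: 2 H₂O → 2 H₂ + O₂. Using standard molar masses, we can check how much fuel and oxidizer a given amount of Martian water would yield; the 100 kg figure is just an example quantity.

```python
# Back-of-the-envelope stoichiometry for electrolysis: 2 H2O -> 2 H2 + O2.
# Molar masses in g/mol (standard values); the water amount is an example.
M_H2O, M_H2, M_O2 = 18.015, 2.016, 31.998

water_kg = 100.0                       # water mined on Mars (assumed amount)
moles_h2o = water_kg * 1000 / M_H2O
h2_kg = (moles_h2o * M_H2) / 1000      # 2 mol H2 per 2 mol H2O
o2_kg = (moles_h2o / 2 * M_O2) / 1000  # 1 mol O2 per 2 mol H2O

# Roughly 11% of the water mass comes back as hydrogen fuel and
# about 89% as oxygen, with nothing left over.
```

Notice that the oxygen-to-hydrogen mass ratio from electrolysis (about 8:1) is richer in oxygen than typical engine mixture ratios (around 5-6:1 for hydrolox stages), so a sample-return rocket fed this way would have surplus oxygen, which is useful for life support.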




"ESA’s ExoMars Trace Gas Orbiter (TGO) has discovered large amounts of water locked up within Mars’ extensive canyon system, Valles Marineris. Credit: From I. Mitrofanov et al. (2021)"(SciTechDaily/ExoMars Discovers Hidden Water in Mars’ Grand Canyon – The Largest Canyon in the Solar System)


Why do we need a Mars station?


A 3D printing system that can use materials taken from Mars would make the creation of the Mars station easier. It makes construction work more flexible and faster. So the first thing that lands on the Red Planet could be a chamber with a 3D printer. That makes it possible that not every nail and screw must be carried from Earth to that planet.

The first Mars station would be primitive. Its purpose is to give experience with structures on the surface of Mars. At a later stage, the mission of those primitive stations is to serve as rest stations for the crews that build more capable and larger stations on the planet. But if that construction work can be done by robots, everything would be easier and faster.

Robots can operate non-stop, and they would not require humans all the time. So the first thing that lands on Mars could be a 3D printer system that creates robots and tools on the planet. That guarantees there is no need to bring every spare part from Earth, and broken robots can be replaced very easily.

That futuristic Mars base would test things like closed nutrient circulation. Of course, those futuristic and still theoretical systems must first be tested on the Moon. But on Mars, nuclear power or some kind of fuel cells are needed to create electricity for the station at nighttime.


Image 3: Pinterest


Similar systems can be used on the Moon. But Mars is farther away from Earth: if there are problems at a Moon station, a rescue crew can reach it in five days. The Mars station must be more independent. It might play a big role in asteroid-belt missions, and far in the future it might offer a resting place for missions to the outer solar system.

If we want to use electrolytically created oxygen in the artificial atmosphere of a Mars station or some other space station, we must remember that pure oxygen is dangerous. If there is a fire at a distant station, pure oxygen makes it catastrophic. So for that kind of purpose, the gas must include an inert gas that lowers its reactivity.

Vegetables and algae offer a good way to create oxygen for the space station, and they can also be used as nutrients. The next challenge is that the Mars station is far away from Earth: the journey will take about 260 days. That means that if there is some kind of leak, the lost gas must be replaced.

In some scenarios, the robots would make the station ready, and then nutrients and algae would be put in to grow in the structure. The light and nutrients would be dosed extremely carefully. Activated-carbon filtering systems would work along with the biological systems, and that would make the atmosphere suitable, safe, and comfortable for the people who take their stand in that still hypothetical Mars station.

The structure could first be filled with carbon dioxide, and then vegetables and algae would transform the atmosphere into something breathable by the time the humans arrive. In those scenarios, the oxygen is separated by using vegetables and algae that live in a tank. Green algae can be used for killing anaerobic bacteria, and those algae also provide material for genetic research.



https://scitechdaily.com/exomars-discovers-hidden-water-in-mars-grand-canyon-the-largest-canyon-in-the-solar-system/


Image 1: https://scitechdaily.com/exomars-discovers-hidden-water-in-mars-grand-canyon-the-largest-canyon-in-the-solar-system/


https://thoughtandmachines.blogspot.com/

Nature can give models for next-generation robots and microchips.




"MIT researchers have pioneered a new fabrication technique that enables them to produce low-voltage, power-dense, high endurance soft actuators for an aerial microrobot. Credit: Courtesy of the researchers" (SciTechDaily/Giving Bug-Like Bots a Boost: New Artificial Muscles Improve the Performance of Flying Microrobots)


Muscle cells could rotate small generators.


MIT researchers have created new low-voltage artificial muscles for improving the performance of microrobots. (1) This is one way to improve the operational range of small flying robots. There is another way to make a power source that uses biological components: the idea is that the muscle cells of flies would be connected to cranks.

Those cranks would rotate miniaturized generators. This kind of robot could drink sugar liquid. The muscle cells could be natural but genetically modified so that they cannot divide. Removing that ability guarantees that the cells stay in the right form and in the right places.

Or those cells can be cloned and grown in cell cultures. This kind of system could run on the same nutrients as living organisms, and those miniature generators could deliver electricity to the computers and other systems that the miniature robot carries.


Fullerene chains could be used as artificial DNA. In this text, the term "artificial DNA" means a carbon chain that acts like natural DNA or RNA.


The artificial DNA could operate as a chemical computer program. These kinds of things might not be DNA in its natural form: the term "artificial DNA" can also mean fullerene chains that act like real DNA or RNA molecules.

The artificial DNA could act as the ROM (Read-Only Memory) program for microchips, and that could be used for the base programming of miniature machines. In that case, the base movements of the system are stored in the chemical computer program.

Artificial DNA does not mean that the chemical computer program is like DNA. In the chemical computer program, researchers could replace the normal molecules and base pairs of DNA with fullerene and other elements. There are four bases in normal DNA.

"Two nitrogen-containing bases (or nucleotides) that pair together to form the structure of DNA. The four bases in DNA are adenine (A), cytosine (C), guanine (G), and thymine (T). These bases form specific pairs (A with T, and G with C). Base pair may also refer to the actual number of base pairs, such as 8 base pairs, in a sequence of nucleotides"(2).

So those nucleotides could be replaced with some metals connected to the fullerene. Or, if we think sharply, the fullerene itself could form a structure that can be used as a data transporter. The fullerene balls would connect to a nanostructure.

They could form a nanostructure similar to DNA or RNA molecules: the fullerene balls would form a ladder-like structure, as in natural DNA, and the length of the "stairs" would be the code that the molecule gives. In the binary model, one fullerene ball could mean 0, and two fullerene balls might mean 1.

The artificial chemical computer programs could also operate in qubit-like modes. In that case, one ball might mean zero, two balls one, three fullerene balls two, and so on. That kind of chemical code could secure microchips.
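The encoding proposed here is essentially a run-length code: the number of fullerene balls in each "stair" stands for one digit. A tiny sketch makes the scheme concrete; the functions and the digit convention are illustrative assumptions about the text's idea, not an established notation.

```python
# Sketch of the proposed fullerene "stair" code: each digit d is stored
# as a run of d + 1 fullerene balls (1 ball = 0, 2 balls = 1, and in the
# multi-level variant 3 balls = 2, etc.). Purely illustrative.

def encode_runs(digits: list[int]) -> list[int]:
    """Turn a digit sequence into ball counts per stair."""
    return [d + 1 for d in digits]

def decode_runs(runs: list[int]) -> list[int]:
    """Recover the digit sequence from the ball counts."""
    return [r - 1 for r in runs]

bits = [1, 0, 1, 1]
runs = encode_runs(bits)          # [2, 1, 2, 2] balls per stair
assert decode_runs(runs) == bits
```

One practical point the sketch exposes: a reader of such a molecule must be able to tell where one stair ends and the next begins, so a physical implementation would need a spacer or clock structure between runs.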

There is the possibility that nanotechnology (3) could be used to create real DNA molecules. This means that in the future we will see more complicated and more effective combinations of nano- and biotechnology. Things like cloned muscle cells and neurons could give small robots the capacity for more complicated operations than ever before.


(1)https://scitechdaily.com/giving-bug-like-bots-a-boost-new-artificial-muscles-improve-the-performance-of-flying-microrobots/


(2)https://www.cancer.gov/publications/dictionaries/genetics-dictionary/def/base-pair


(3)https://phys.org/news/2021-12-nanotechnology-genome-multiple-muscles-simultaneously.html


Image:https://scitechdaily.com/giving-bug-like-bots-a-boost-new-artificial-muscles-improve-the-performance-of-flying-microrobots/




Thursday, December 16, 2021

Technical systems sometimes look supernatural.

 




Quadcopters can also use gesture control.


One of the things you must know about witchcraft and other things like remote viewing is that they can be based on things like hidden eavesdropping tools or informers. So there is somebody next to you who tells everything to somebody else. Another version is that there is some kind of hidden camera or recorder in the room, and those things allow some "master" to hear everything.

Technical equipment can be used to make people look somehow supernatural. Holograms can simply be used to create things like ghosts, and extraordinary-looking aerial vehicles can be used to turn people's heads. Quadcopters can also be equipped with hologram projectors, so they can create real-looking UFOs in the air.

In the same way, aircraft can carry hologram projectors that make them look like they are hunting a UFO. That kind of system might be used when top-secret missiles or other secretive technology is tested in some area: it can lead the jet fighters away from the secured airspace. Or some flying saucers can be stealth helicopters that are used to recover secret aircraft from hostile areas if those systems are damaged.

Things like UFO kidnappings, called abductions, can also be staged by using virtual reality. Nitrous oxide and some psychoactive chemicals can be used to boost the effect of virtual reality.



A new type of interface could make techno-witchcraft possible.


We could almost create a "Jedi" by using technical equipment. Quadcopters can be controlled by using gestures. And if those quadcopters have hands or manipulators, they can be used to operate computers. The simplest way is to make manipulators that have cameras: then the quadcopter only needs to know what a certain letter looks like. The operator could fly that quadcopter to a computer and simply write the texts by using its keyboard.

Or the quadcopter might carry a virtual keyboard, and the remote controller would connect that thing to a USB port. Writing the commands can be done by using a remote computer. The mouse can be virtual, the quadcopter can have a manipulator that acts like a mouse, or the quadcopter itself can act like a mouse.

The system can be on wheels, with a manipulator connecting it to the USB port. Then the operator can use a virtual keyboard and move the quadcopter on the table. The operator can see the screen by using the camera located in the quadcopter, or the quadcopter can use the USB and remote connection to send the virtual screen to the operator.

But the system can also use an interface where certain letters are chosen by using eye contact. The system is called an "eye tracker". The prototype of that system allowed the famous physicist Stephen Hawking to communicate with other people even though he had ALS. Today those systems are sold for gaming, and everybody can buy them.

The computer follows the position of the eye, and the selection of a letter happens by blinking. The person sees the activated letter on a HUD screen. That system can be used with a portable HUD and VR glasses.
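The blink-to-select loop is simple to sketch. This assumes the tracker delivers (gaze position, blinked?) events and an on-screen keyboard grid; the 3x3 layout and event format are illustrative assumptions, not any real eye-tracker API.

```python
# Minimal sketch of blink-to-select typing. The tracker is assumed to
# report (row, col, blinked) events over an on-screen letter grid; both
# the grid and the event format are invented for illustration.

KEYBOARD = [["A", "B", "C"],
            ["D", "E", "F"],
            ["G", "H", "I"]]

def type_from_gaze(events):
    """Append the letter under the gaze each time a blink is reported."""
    text = []
    for row, col, blinked in events:
        if blinked:
            text.append(KEYBOARD[row][col])
    return "".join(text)

# The gaze wanders over the grid; only blinks commit a letter.
events = [(0, 0, False), (1, 1, True), (0, 2, True), (2, 0, False)]
print(type_from_gaze(events))  # EC
```

Production eye typing usually adds dwell-time thresholds and blink-duration filtering so that natural blinks do not trigger selections, which is the main usability problem of this naive version.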

The same systems that are used for controlling virtual characters in games can be used to control real physical machines like quadcopters.

But gesture controls, BCI, and other kinds of systems make that technical equipment more powerful and flexible. The controller of remote systems can use a web camera to send gestures to the remote system. There is the possibility that the user of the system might wear certain gloves for confirming those commands.



Techno-Jedi


When we think about Star Wars and the Jedi who have some "supernatural force", there is the possibility that all of those things could be made by using technical equipment. The flying light swords can be quadcopters that are controlled by using a BCI (Brain-Computer Interface). Or, actually, what is needed is a gesture control system. The simplest model uses data gloves.

So in that kind of equipment there would only be blowers that control its movements in the air. And things like remote killing could be done by using a gesture activator.

But if the person wants to do magic tricks without gloves, that requires only jewelry with a certain type of sensor. On the wrist is put a motion sensor, or a sensor that detects the electrical actions in the nervous system. That sensor can also be installed surgically.

When the person moves a finger or a hand in a certain way, the system gives a certain command. So the light sword can fly to the hand when a certain gesture is made. The other systems can communicate with portable data handling units that can be hidden in the light sword or a mobile telephone. That kind of system could also be used to shut down generators or remotely activate a jet fighter's levitation systems.

When a person with a certain dress and gloves makes a certain gesture, that could release botulinum in the victim's body. The user could activate the system by using a radio impulse sent from a hidden radio transmitter. Or there is a small camera in the clothes or on the body of a victim, and when a certain gesture is made, that camera sends a signal to the botulinum capsule. That kind of thing could be used to create the image of a great witch.


https://www.lead-innovation.com/english-blog/eye-tracking-as-an-interface


https://eu.mouser.com/images/microsites/quadcopter-capabilities-flight-theme.jpg


https://visionsoftheaiandfuture.blogspot.com/

The future of interstellar spacecraft.

 





Interstellar spacecraft might not become practical very soon. But theoretical research, along with careful missions, is opening the road to an interstellar journey. Maybe the time of that mission will come in 200-300 years. Nobody knows when we will be ready to travel to another solar system, but when we remember history, many things have been "impossible".

The Chinese knew how to make gunpowder rockets in the 1200s(1), and the Russian Konstantin Tsiolkovsky(2) (1857-1935) created theoretical research on rocketry and spaceflight. Things like interplanetary spaceflights and space elevators(3) were also envisioned by Konstantin Tsiolkovsky.
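Tsiolkovsky's best-known result is the rocket equation, Δv = vₑ · ln(m₀/mf), which is worth stating concretely; the exhaust velocity and mass ratio below are typical textbook values for a hydrogen-oxygen stage, chosen here as an example.

```python
import math

# Tsiolkovsky's rocket equation: delta_v = v_e * ln(m_full / m_empty).
# Example numbers (hydrolox exhaust velocity, mass ratio of 10) are
# typical textbook values, not figures from the text.

def delta_v(exhaust_velocity: float, m_full: float, m_empty: float) -> float:
    """Velocity change a rocket can achieve, ignoring gravity and drag."""
    return exhaust_velocity * math.log(m_full / m_empty)

dv = delta_v(4400, 10.0, 1.0)   # ~10,100 m/s, roughly orbital-class
```

The logarithm is the whole story of why interstellar flight is hard: every extra multiple of Δv demands an exponential increase in the propellant mass ratio, which is what pushes concepts like Daedalus toward fusion and antimatter.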




Image 2: Konstantin Tsiolkovsky (1857-1935)


That man is called "the father of spaceflight". Tsiolkovsky also created research and articles about lighter-than-air aviation. There was a rumor that Tsiolkovsky created the idea of using a hydrogen balloon or high-flying airship to carry space rockets to high altitude and then launch the rocket to orbit or a sub-orbital trajectory.

The hydrogen balloon can lift the rocket to an altitude of 30 kilometers. And that thing could make it possible to launch smaller rockets to the orbiter. But for some reason, that plan remained at the theoretical level. And that reason could be bureaucracy. 

The thing is that technology advances all the time. And maybe the advancing of technology is faster than we think. The thing is that the first interstellar mission might become reality sooner than we expect. The thing is that the theory is the first step for making dreams real. And modern scientists have more advanced tools and experience than Konstantin Tsiolkovsky. 

But there are many things. What that man invented and what remains at the theoretical level. One of his greatest ideas is the space elevator. That giant structure that uses the centripetal force of the Earth is ever created in practice.  



Image 3: The Daedalus and Saturn V comparison


The British Interplanetary Society, the theoretical Project Daedalus(4), and its improvements. 


There are a couple of theoretical concepts for interstellar spacecraft. All of those concepts are probes, and carefully made steps will open the road for successful manned interstellar missions far in the future. But the first interstellar concepts would be more primitive than the sci-fi spacecraft that travel across the galaxy in a couple of weeks. 

In the 1970s the British Interplanetary Society created Daedalus, a theoretical spacecraft that could travel to Barnard's Star. Daedalus was planned to use thermonuclear detonations to reach that distant star. In the updated version, Daedalus would use antimatter. 

When Daedalus closes in on the objective of its mission, the craft would create antimatter by using an antimatter generator. The antimatter would be created by letting the solar wind impact a gold leaf, which would turn those particles into antimatter. That would make it possible for Daedalus II to bring samples from the solar system it visits. 

NASA has created the idea of Project Longshot(5), which is quite similar to Daedalus. As I wrote earlier, both of those theoretical craft are unmanned, so there are no worries about the lifetimes of astronauts. But of course, there is the possibility that a theoretical interstellar probe is forgotten if there are other kinds of problems on Earth. The time of that craft lies far in the future. 


(1)http://en.chinaculture.org/created/2005-07/21/content_70826.htm


(2)https://en.wikipedia.org/wiki/Konstantin_Tsiolkovsky


(3)https://en.wikipedia.org/wiki/Space_elevator


(4)https://en.wikipedia.org/wiki/Project_Daedalus


(5)https://en.wikipedia.org/wiki/Project_Longshot


Image 1: https://en.wikipedia.org/wiki/Konstantin_Tsiolkovsky


Image 2: Pinterest


https://interestandinnovation.blogspot.com/

The need of the customer plays a primary role in data technology.

   




Determining the use of a system is important because that makes systems effective, and multi-use tools might at first seem more uncomfortable and slower than single-use systems. The thing is that as the power of the hardware rises, the code that is run on the hardware turns more complicated. 

A similar advance continues with quantum computers. The need to run more complicated code for more independently operating artificial intelligence means that the power of quantum computers must also increase. The thing that creates this need is that if a researcher can leave an entire routine process to computers, that frees the person's time for other things. 


In scientific work and other duties, boring mechanical work can be left to AI-driven systems. 


AI can be used for routine duties like observing changes in the brightness of stars. In the search for exoplanets, most of the work is watching the brightness of a star, and that also takes most of the time. 


Leaving routine work, like observing the changes in a star's brightness, to artificial intelligence frees the researcher's time for other work. 
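The routine part of that work can be sketched in code. The example below is a minimal illustration, with made-up numbers rather than real telescope data, of how a script could flag the brightness dips that a transiting exoplanet causes:

```python
# A minimal sketch (hypothetical data) of automating the routine part of
# exoplanet hunting: flagging measurements where a star's brightness
# dips clearly below its normal level, as a transit candidate would.

def flag_brightness_dips(brightness, threshold=0.99):
    """Return indices where relative brightness falls below threshold."""
    baseline = sum(brightness) / len(brightness)
    return [i for i, b in enumerate(brightness)
            if b / baseline < threshold]

# Simulated light curve: mostly flat, with a dip around samples 4-5.
light_curve = [1.00, 1.00, 1.01, 1.00, 0.97, 0.97, 1.00, 1.00]
print(flag_brightness_dips(light_curve))  # -> [4, 5]
```

A real pipeline would also handle noise and repeated dips at a fixed period, but the principle is the same: the machine watches the numbers so the researcher does not have to.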


x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x


The route of data handling.


Simple and easy-to-use systems that can be used for fixed solutions.


Stage 1: Pocket calculator

Stage 2: Scientific calculator


The route went toward modular and easy-to-use solutions. The latest things, like neural networks, are self-learning tools. They need only the data matrices that they compile from the data the system collects from the environment. 


Stage 3: Computer or stand-alone computer



The computer, or stand-alone computer, was a great advance: the user can customize a modular system by using software.

Limited-environment-connected computers: these allow asking for help from workmates, but the problem is that this reserves the workmates' time. They also allow downloading pre-loaded software to the computer. 

Stage 4: Internet-connected computer:


Internet-connected computer: gives access to an unlimited program library. Open, internet-connected systems allow an independent search for data, but the worker must have the skills to analyze that data.

Stage 5: Neural networks:

Neural networks are the key to machine learning: they are learning systems. The beginning of those systems was the indexing tools of the internet. The system can recognize situations by using certain parameters. 

Those parameters can contain information about the things that happened before a bomb strike. Then the system can compare the data collected from surveillance cameras with that data matrix. 
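As a rough illustration of that idea, the sketch below stores situations as parameter sets and compares an incoming observation against them. All names and parameters are invented for the example:

```python
# A hedged sketch of situation recognition: the system stores parameter
# sets that describe known situations, then compares incoming
# observations (e.g. from camera feeds) against them.

SITUATIONS = {
    "unattended package": {"object": "bag", "owner_nearby": False},
    "normal luggage":     {"object": "bag", "owner_nearby": True},
}

def recognize(observation):
    """Return the names of stored situations the observation matches."""
    return [name for name, params in SITUATIONS.items()
            if all(observation.get(k) == v for k, v in params.items())]

print(recognize({"object": "bag", "owner_nearby": False}))
# -> ['unattended package']
```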

Stage 6: Quantum computers


Quantum computers: those systems are far more powerful than binary computers for certain problems. They can run more complicated code and algorithms than any binary computer before. The computer can search data from the internet and make solutions by using open-source data. 

Stage 7: Quantum neural networks:


Will the computers of the future have imagination and feelings like humans? And could they be more intelligent than the humans who made those systems to serve them? 

Quantum neural networks: a self-learning internet- or quantum-based network interconnects quantum computers. The ability to send qubits between quantum computers removes the borders between those systems, and that means there is no limit to the code that the system can run.


x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x


Determining the use of the system is important because that makes systems effective, and the multi-use tools might first seem more uncomfortable and slower than single-use systems. 

When we compare pocket calculators with quantum computers, the pocket calculator seems more comfortable and useful. Pocket calculators are effective tools for solving everyday mathematical problems, like the cost of ten boxes of butter. 

But if we want to solve more complicated mathematical problems, like exponents and geometric functions, we need scientific calculators. Those functions are built into those calculators, and that minimizes the possibility of human error. 

Solving those problems with a regular pocket calculator is possible, but the button-pushing is more complicated, and there is the possibility that the user makes mistakes while inputting data. The reason we buy a calculator is that we want to make our lives easier. 

Pocket calculators and scientific calculators are impressive tools for solving mathematical problems, but they are limited. The abilities of those systems are fixed in the electronics. These calculators are like RISC tools: effective in some very limited sectors. But if we want to use our systems for something else, we need a new type of system. 



x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x


Intelligent mobile telephones are like pocket computers. There is the possibility to load applications onto those systems, which means the system is a modular solution to data-handling problems. 

The hardware stays the same, and the physical system is only a platform that runs software. Software-based solutions make those systems flexible, and they can be customized to the needs of the user. 


x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x


Sending qubits over long ranges could be done by using nanotube-based particle accelerators. In that case, electron-based qubits would be shot between quantum computers by using particle accelerators. 


x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x


The computer is the tool that answers the problem of multi-use tools. A single computer can be used for solving mathematical problems, but there might also be some kind of writing software, which means the system can also be used for writing reports. For searching data, though, the system operator still needed a book. 

If that computer is connected to a network, the user can ask for help from workmates. But if the computer is connected to an open network, the user does not need to borrow the workmates' time, and searching for data is easier. Network-based systems can keep a record of the most common searches. 

That means they can index searches so that the most common search is at the top. The system can also preload data for the users. This is the beginning of machine learning. 

The next steps are neural networks. Those networks search for data and store it. The system can also collect data about connected searches: which searches often follow a certain type of search. If certain searches often follow some other search, the system can preload those following searches, or the predicted following searches. That is deeper learning than just storing the prime searches. 
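The preloading idea above can be sketched very simply: count which search tends to follow which, and predict the likeliest follow-up. This is only an illustrative toy, not any real search engine's method:

```python
# A minimal sketch of the "preloading" idea: learn which search often
# follows which, then predict (preload) the likeliest next search.
from collections import Counter, defaultdict

follow_counts = defaultdict(Counter)

def record(history):
    """Learn follow-up pairs from an ordered search history."""
    for prev, nxt in zip(history, history[1:]):
        follow_counts[prev][nxt] += 1

def predict_next(query):
    """Return the most commonly observed follow-up search, if any."""
    followers = follow_counts[query]
    return followers.most_common(1)[0][0] if followers else None

record(["flights", "hotels", "flights", "hotels", "flights", "car rental"])
print(predict_next("flights"))  # -> hotels
```

Storing the top searches is stage one; predicting the next search from observed patterns is the "deeper learning" step the text describes.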

The quantum computer is the ultimate tool for analyzing the data that is collected from the internet. And the next step is the development of quantum networks. 

The quantum network is the ultimate connection of quantum computers. If quantum computers can communicate by using quantum technologies, the quantum computers use qubits also for long-range communication, and then there is no limit to the size of the quantum computers. Sending a qubit could be done by using nanotube-based particle accelerators that shoot the electrons, where the data is stored, between quantum computers. 


https://visionsoftheaiandfuture.blogspot.com/

Precise logic vs. fuzzy logic.




The "barren plateaus" trap is one of the most cutting-edge topics in the computing sciences. Precise data handling works perfectly in a closed and very unilateral environment. But in an open environment, fuzzy logic is a more effective tool. 

And the power of fuzzy logic increases as the environment turns larger and the data turns more versatile. Fuzzy logic plays the key role in handling the unsorted data mass that is collected from a natural and chaotic environment. 

Have you imagined a situation where you go into a flat where everything is in a certain order? You can't find anything there, and then some other person comes in. That person finds everything that is needed in less than a second. 

You cannot ever find anything faster than that person, because that person knows everything in that room. The person knows every single nail in that flat, and that makes an impression. But the prime question is: does that person know anything else?

Does that person ever go outside? We can say that if the person stays all the time in the same flat, that person knows everything that is in the flat. There is no other information than what is necessary for finding things in a closed environment. 

That person would, of course, look outside sometimes, but the information that person gets is limited. There might be a car in front of the house, but the person would not know why the car is in front of the house. 

The limited environment takes us to the "barren plateaus" trap. The person who lives in a closed environment seems wiser and more intelligent than a person who lives in an open environment. 

When a person who lives in an open environment comes into a strange space, that person must search for everything. The person who lives in the closed space must only remember where things were put, and then pick them up from those places. 

In a closed and very limited space, everything can have a certain and precise order. But if the environment is very wide and the data is very versatile, the answer to the problems is fuzzy logic. 

In a closed environment, the person might make notes and read where things were left. But in an open environment, making precise notes about where we left everything is impossible, or those notes must be very large, and that makes them uncomfortable. 

If a person operating in an open environment wants to use precise logic, that person must read all the notes all the time to answer a question like: where did that person leave, for example, the screwdriver? 

So precise logic is not a practical thing for an open environment. The person must turn to a different type of notes. Suppose we keep memos sorted by place: the person uses multiple memo books, each marking the tools that were used at a certain place. 

Then there is the possibility that we forget to write the entry when we use the screwdriver. And if we sort data by place, we must always read every single line in every notebook and hope that the screwdriver is marked there.  

But what if we sort those memos by the names of the tools? That makes it possible to find an individual tool more easily. The most effective thing is to use a different memo book for each tool. 

The data is then sorted by the name of the tool. When we want to find the screwdriver, we just take the memo book with the word "screwdriver" on it. There we can find the place where we used that tool the last time; the searcher should find it on the last page. 

If there is a mark that the screwdriver was taken for use but not returned, the screwdriver is probably still at that place, and the exact place where the searcher can find the tool will be found from the work order. This kind of reasoning is called fuzzy logic. In computing, those notebooks are the tables of a database. 
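The memo-book idea maps directly to code. In the sketch below, each tool has its own "notebook" (a table, in database terms), so finding the screwdriver is a direct lookup instead of reading every note. The entries are invented for the example:

```python
# A hedged sketch of the memo-book idea: one "notebook" (table) per
# tool, each holding the places where that tool was used. Finding the
# screwdriver is a direct lookup instead of scanning every note.

logbooks = {
    "screwdriver": ["workshop", "garage"],
    "hammer":      ["attic"],
}

def last_known_place(tool):
    """Open only that tool's notebook and read its last entry."""
    entries = logbooks.get(tool, [])
    return entries[-1] if entries else "no record"

print(last_known_place("screwdriver"))  # -> garage
```

Sorting by place would force a scan of every notebook; sorting by tool name turns the search into one lookup, which is exactly the advantage the text describes.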

Wednesday, December 15, 2021

Machine learning needs stimulus for making solutions.

 





"A barren plateau is a trainability problem that occurs in machine learning optimization algorithms when the problem-solving space turns flat as the algorithm is run. Researchers at Los Alamos National Laboratory have developed theorems to prove that any given algorithm will avoid a barren plateau as it scales up to run on a quantum computer." (https://www.lanl.gov/discover/news-release-archive/2021/March/0319-barren-plateaus.php)

Have you heard of the "barren plateaus" problem? The name recalls J.R.R. Tolkien's books, where parts of the fictive Middle-earth are very dry and hot places. The image above this text introduces the "barren plateaus" problem very well. 

In the next example, food and water stand for information. If a creature lives on "fresh plateaus", it can take nutrients from nature. There is a bigger chance to make mistakes, but the nutrients are versatile, and finding and testing new things keeps the creature working. When a creature searches for things in nature, there are many possibilities to test which types of vegetables or other nutrient sources the creature can use. Of course, that sometimes requires climbing the mountains. 

In that image, the problem is the mountain. If the creature lives on the landscape in the upper image, that creature has stimulus. The green landscape offers motivation: the grass is the food, and the creature wants to go to the mountain. Grass and water are everywhere, and the creature has the motivation to climb the mountain. That could be the wish to see farther places, or maybe the creature wants to get fresh air. 

The lower image introduces the situation where the creature lives on a "barren plateau". The water and food are in a pocket or bag, and of course the creature never makes mistakes when it wants a certain sandwich. The creature knows from which pocket it can get the sausage sandwich and where the drinking bottle is. But sooner or later, the nutrition turns unilateral. In the barren plateaus problem, the creature gets pre-made food. 

If we transfer that model to information technology, the creature gets pre-made solutions that fit certain situations, and that makes this kind of model very limited. The situation is like the creature living in a desert or on "barren plateaus": the supporter brings water and a sandwich to a certain point at a certain time. The food is guaranteed, but it's always the same. 

And what the creature gets depends on the supporter. If the supporter wants to give a sausage sandwich, that is the food. If some day the supporter wants to give a cheese sandwich, the creature gets a cheese sandwich. 

When everything is pre-made, the creature doesn't want to try to find food itself. It is difficult to make mistakes if some other person makes the food. It is the same way in data science: if all problems are pre-solved, it's very hard to make wrong solutions. 

The term "flattened landscape" means that when the creature lives on "barren plateaus", the limited information sources make the problems look harder to solve. Because the creature always stays at the certain point where the supporter brings the sandwich and water, the creature does not even try to climb the mountains or solve the problem. 
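The "flattened landscape" can be illustrated numerically, without any quantum code. On a landscape with features, an estimated gradient points downhill; on a barren plateau it is zero, so a learning algorithm gets no direction to move in:

```python
# A numeric illustration (not quantum code) of why a flat landscape
# stops learning: on a featureless plateau the estimated gradient is
# zero, so gradient descent has no downward slope to follow.

def gradient(f, x, h=1e-5):
    """Central-difference estimate of df/dx at x."""
    return (f(x + h) - f(x - h)) / (2 * h)

sloped = lambda x: (x - 3.0) ** 2    # a landscape with a clear minimum
plateau = lambda x: 1.0              # a barren, featureless landscape

print(round(gradient(sloped, 0.0), 3))   # nonzero: points downhill
print(round(gradient(plateau, 0.0), 3))  # zero: nowhere to go
```

The quantum-machine-learning case is of course far more subtle, but the core trouble is the same: without landscape features, the training algorithm cannot find the way toward the minimum.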


"A barren plateau is a trainability problem that occurs in machine learning optimization algorithms when the problem-solving space turns flat as the algorithm is running".

"In that situation, the algorithm can’t find the downward slope in what appears to be a featureless landscape and there’s no clear path to the energy minimum. Lacking landscape features, machine learning can’t train itself to find the solution". (Los Alamos National Laboratory, Solving ‘barren plateaus’ is the key to quantum machine learning)


https://www.lanl.gov/discover/news-release-archive/2021/March/0319-barren-plateaus.php


Image:https://www.lanl.gov/discover/news-release-archive/2021/March/0319-barren-plateaus.php


https://interestandinnovation.blogspot.com/


The center of AI development should be: how can it serve people better?




The problem with artificial intelligence development is intrinsic value. That means that the creation of more and more intelligent machines becomes the prime objective: in that development process, developing artificial intelligence is a value in itself. 

The prime objective should be how artificial intelligence can serve humans. How might AI make life easier and safer?

Thinking that AI can fully replace humans is pure imagination. There are lots of things that we don't know about the brain. We maybe know how neurons switch connections and how the brain learns new things. 

But we don't know what kind of role certain things play in certain actions. Things like imagination are totally out of reach of artificial intelligence. Even if we could model that ability for abstract thinking in theory, it is hard to make in real life. 

Complicated AI requires powerful computers. And AI that runs on a quantum computer can learn things unpredictably fast. Quantum computers are far more powerful than binary computers. 

Self-learning algorithms that run on quantum platforms can do unpredicted things. And a machine that involves things that are not predicted is always dangerous. 

When we think about the feelings and consciousness of a computer, we must remember that if the machine has feelings, it is dangerous. If a robot turned conscious, that would make it similar to living organisms. 

And all organisms defend themselves if they are under threat. The AI might feel it is under threat in the case where its server is shut down. The AI itself is not dangerous, but if it is a system that controls things like weapon systems, it can try to destroy the people who are shutting it down. 

Making a real-world computer that has dreams and imagination is very hard. Things like quantum computers have shown that theoretically easy things can turn difficult in real life. 

Artificial intelligence can be better than humans in certain limited sectors. AI can play chess better than humans, but humans can do many more things than AI. And making an AI that has similar transversal competence to humans is difficult.

There is a possibility that every single one of the human brain's roughly 86 billion neurons has different, individual programming. So making an AI that has the same capacity would require 86 billion tables in the database, and maybe 86 billion microprocessors. 

But of course, we could create artificial neurons by using small bottles that contain some kind of microchip and quicksilver. The quicksilver would close the electric connections of those bottles. 

In that system, the quicksilver acts as a liquid switch. To make a connection, a magnet pulls the quicksilver to the connection points of the wires, and that makes the system route data to the right wire. This is a model of an artificial neuron. 

And the microchip holds the database. That kind of system can emulate a single neuron. But emulating a human would need 86 billion bottles. 
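That liquid-switch neuron is speculative hardware, but the routing behavior it describes resembles an ordinary threshold neuron, which can be sketched in software. The weights and threshold below are arbitrary illustration values:

```python
# A software analogue of the liquid-switch neuron described above
# (the hardware itself is hypothetical): the "switch" closes and routes
# the signal onward only when the weighted input reaches a threshold.

def neuron(inputs, weights, threshold):
    """Fire (route the signal) when the weighted sum reaches threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

print(neuron([1, 0, 1], [0.5, 0.9, 0.6], threshold=1.0))  # -> 1 (fires)
print(neuron([1, 0, 0], [0.5, 0.9, 0.6], threshold=1.0))  # -> 0 (stays open)
```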

Humans should be the thing that technology serves, and in the real world, humans should be at the center of development. The development of artificial intelligence is different from anything else: artificial intelligence is an open-source thing. Almost all programming languages are public, and that means everybody can start their own artificial intelligence projects. 

Artificial intelligence is a powerful tool. Many people say that AI steals people's jobs. The question is: what kinds of jobs will AI take? Are those jobs popular? Are the people criticizing AI willing to do those jobs? The question is always about morals and ethics. What if somebody makes a robot for military purposes? 

Ethically that is wrong. But things like nuclear weapons are also inhumane. Nobody is stopping the development of nuclear reactors because the plutonium those reactors create can be used for nuclear weapons. Every single nuclear reactor in the world produces plutonium, but there are no large-scale campaigns about the ethics of nuclear technology. In the same way, fusion technology can be used for weapons research, in both plasmoid and fusion explosives. 

But somehow artificial intelligence is seen as a different thing. AI can make human lives better, yet the only thing that is seen in AI is military systems that kill people without mercy. Things like nuclear weapons are not called merciless killers, even though they are inhumane military technology. If a person gets radiation poisoning, that causes extreme pain and finally a slow and certain death. But when inhumane weapons are used by human operators, it is somehow more acceptable than some kind of robot that shoots enemies with a machine gun. 

Robots are a thing that can be misused. They can be used as riot police and military operators. But the humans who serve in those roles also serve governments, and the government makes the decisions about where it wants to use those things. 

But those things can also save humans. They can be used as tools for giving medical attention to people, or they can go into nuclear reactors in overheating situations. Robots can research jungles and volcanoes without risking human lives. And robots can travel to other planets. Those trips take years, but for robots that time doesn't matter. 

So I believe that the first thing that walks on the surface of Mars or the icy moons of Jupiter is a robot controlled by very independent artificial intelligence. That means no researcher must spend a large part of a lifetime on that trip. A trip to Jupiter takes about 600 days in a flyby mission. 

But if the craft wants to position itself into orbit, the journey takes about 2000 days. That means a one-way trip takes over 5 years, and the return to Earth takes 5 or more years. So the minimum time for that mission is over 10 years. 

Of course, the craft should also spend some time in orbit. If robots made that mission, the researchers could stay at their homes and do their everyday jobs. It would not require human operators to spend 10-20 years away from home. That is one example of how AI can help researchers in extremely difficult missions. 


Image: https://www.salon.com/2021/04/30/why-artificial-intelligence-research-might-be-going-down-a-dead-end/


Noise is a problem for voice-commanded robots.

   





Artificial intelligence, and especially voice commanding, has one problem, and the name of that problem is noise. When people give orders to robots, the problem is how to filter the orders that were given on purpose from the noise around the area. That selection is important when orders are given to robots that operate in the middle of people. 

The voice commands for a robot that operates at a station can consist of things like "would you step away" or "help", which means some person needs help. But how do we separate a request to carry packages from a case where somebody falls onto the railway? And of course, there are many commands that robots should not follow in every case. If a robot gets the command to hurt somebody, 

that robot should not follow it. But the problem is how the robot separates real orders from "ducky talk". One version is, of course, following the method that is used in the training of dogs. In that method, a gesture is given before the command, and that gesture prepares the robot for the command. 

But there is also the possibility that the gesture is connected to some mark, like a ring or jewelry, that confirms to the robot that the person is allowed to give such orders. A QR code hidden in a wristwatch or ring could tell that the person has certain rights to the system. That kind of system can be used when some persons give special orders to robots. 
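The authorization idea can be sketched as a simple command gate: restricted commands pass only when a valid credential, such as a code read from a ring or wristwatch, accompanies them. The tokens and commands below are invented for the illustration:

```python
# A hedged sketch of the authorization idea: the robot accepts a
# restricted spoken command only when it comes with a valid credential
# (e.g. a QR code read from a ring or wristwatch). Tokens are made up.

AUTHORIZED_TOKENS = {"QR-7391"}            # credentials with command rights
RESTRICTED = {"open airlock", "shut down"}  # commands needing authorization

def accept(command, token=None):
    """Refuse restricted commands unless an authorized token is shown."""
    if command in RESTRICTED and token not in AUTHORIZED_TOKENS:
        return False
    return True

print(accept("carry this package"))          # ordinary request: allowed
print(accept("shut down"))                   # restricted, no token: refused
print(accept("shut down", token="QR-7391"))  # restricted + credential: allowed
```

A real system would also have to verify the credential cryptographically and filter the audio itself, but the gating logic is the core of separating authorized orders from noise.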

If we think that an AI-controlled robot operates on the battlefield or on security missions, the robot must separate orders from noise. There is the possibility that in those areas robots are disturbed by using loudspeakers. With faked sounds, like copied speech or speech imitations, enemy operators could try to turn robot attacks against the robots' own masters. 

The man-shaped robot is a very interesting tool. The robot itself is a multipurpose system, and the AI or operating system determines the things the robot can be used for. There is the possibility that cleaner or cargo robots have a so-called "ghost protocol": when the robot sees things like weapons, or when somebody is in danger, it activates a protective mode. The sign that is given to the robot must be clear enough. 

There is an article below this text that introduces how neuroscience is used to filter noise from the orders that are given on purpose. But there are many other ways to make the robot separate the noise, or unauthorized commands, from the commands given on purpose by people who have the authorization to give certain types of commands to robots. 


https://www.quantamagazine.org/ai-researchers-fight-noise-by-turning-to-biology-20211207/


Image 1)https://techbullion.com/machine-learning-market-profile-to-triumph-existing-processes-in-nearly-all-business-verticals-manifests-stupendous-growth/


https://thoughtandmachines.blogspot.com/

The future AI cognition mimics humans.

The AI can have a physical body. The robot body communicates with supercomputers. And it makes them more flexibility.  AI learns the same wa...