
Friday, August 15, 2025

New drones are a challenge for security.



Geran-3 drone


The new jet-engined Shahed drones, called Geran-3, cause problems for Ukraine's air defense. That is one of the things the war in Ukraine has taught us: the rapid development of the drone industry can render countermeasures that were useful and effective a couple of weeks ago ineffective today. Systems like optical-fiber-controlled drones are hard targets for defenders. 

Those drones are almost immune to normal jamming systems. EMP pulses that destroy electronic components, or laser systems that cut the command fiber, are the tools that can affect them. But a non-coherent EMP also endangers the defender's own drones and other vehicles inside the pulse area. And a laser system must first detect the optical fiber before it can cut it. 

Fiber-controlled drones can also be linked in chains, which improves their attack range. The optical fiber is pulled through another drone, and the drones form a chain that is hard to detect or avoid. There is also the possibility that drone swarms are controlled using laser communication: the lead drone can be a fiber-controlled system, and it can relay laser pulses wirelessly to the other drones. 

Laser-controlled drones have one problem: they require a direct line of sight, or a fiber that carries the command signal. Regular lasers are awkward for drone control because they must be aimed with high accuracy; blinking holograms can help with that problem. Optical communication is harder to jam, but laser beams cannot travel through smoke and fog, which also hide drones from defenders. 




The Shahed drones are now dropping anti-tank mines. There is also the possibility that the jet-engine-powered versions of those drones can carry anti-tank weapons. Anti-tank or anti-radiation missiles would increase those drones' ability to cause damage, and the drones could use those missiles against power stations or radar installations. 

They can also disturb air defense. That is the problem with weapon development: the solutions that were effective against the propeller-driven Shahed-136 are not effective against the jet-engine versions, which can fly much higher. The next step will be AI-controlled drones that operate independently. The drone flies near its target using inertial navigation, or it can use TERCOM: the AI follows the terrain and compares that data with reference images stored in its memory. 

When the drone is close to its target, the system starts to search for the point that it must destroy. It uses photographs stored in its memory to detect the target, and then it orders the drone to dive and make a kamikaze attack. Basically, those drones can use a target-detection system similar to the Javelin's. Image-based target recognition can be used in all types of missiles and drones. 
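As a rough illustration of that image-based terminal phase, the sketch below matches a stored reference photograph against an onboard camera frame with simple template matching. This is a minimal sketch only; the inputs, the threshold, and the idea of handing the offset to a flight controller are assumptions, not a description of any real weapon's software.

```python
# Minimal sketch of image-based terminal guidance: compare the camera frame
# with a reference photo stored in memory. Inputs and threshold are hypothetical.
import cv2

def find_target(frame_gray, reference_gray, threshold=0.8):
    """Return the best-match location of the reference image in the frame,
    or None if the normalized correlation stays below the threshold."""
    result = cv2.matchTemplate(frame_gray, reference_gray, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return max_loc if max_val >= threshold else None

# Usage idea: call find_target() on each frame; when it returns a location,
# pass the offset from the image center to the flight controller as a dive cue.
```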


https://www.armyrecognition.com/focus-analysis-conflicts/army/conflicts-in-the-world/russia-ukraine-war-2022/russias-jet-powered-shahed-238-drones-introduce-new-challenges-to-ukraines-air-defenses

https://www.twz.com/air/russian-mig-29-fitted-with-an-interceptor-drone-is-a-laughable-mess


https://www.twz.com/air/russias-shahed-long-range-drones-are-now-dropping-anti-tank-mines


https://en.wikipedia.org/wiki/HESA_Shahed_136

https://en.wikipedia.org/wiki/TERCOM




Friday, July 11, 2025

Imagination limits biotechnology.



Above: Fictional spacecraft from the movie "Prometheus"

 

The DNA molecule allows the creation of new 3D materials, which can be organic or inorganic. Connecting this new technique with advanced biological AI, based on 3D nanoprinters and RNA- or DNA-controlled biorobot bacteria, could make many things that beat even sci-fi movies. New natural materials that resemble coral shells, combined with spider silk and the clam's silvery, glass-like pearl material, could make it possible to create structures that have not been introduced even in sci-fi books. 

Researchers are thinking about the possibility of growing algae on Mars. That algae can of course act as food. There is also the possibility that genetically engineered algae with a spider-silk plate and pearl-membrane combination could be used as sandbags. Genetically engineered algae could also produce spider silk for ropes and clothes. A genetically engineered fungus could be used to grow extremely long neural tracks that allow biocomputers to control other systems. Those axons could act as data transporters between microchips that use living neurons as data processors. 

There is also the possibility of creating boxes from molten sand. The system then fills those boxes with sand and covers them with that extra-strong material. Such boxes could protect permanent structures against micrometeorites: a sandbag or sand coffin is easier to replace than a pressure wall is to fix.

Biological robots can be used to transform other cells. In the case of brain damage, they could travel to the brain and change their shape into neurons, which would help repair the damage. 



“Artistic rendering of the assembly of designed 3D hierarchically ordered nanoparticle structures using DNA-programmable bonds (left). The desired structure and its design with optical reflection properties, and an image of formed material with reflective characteristics (top right). Electron microscopy image of the realized structure with nanoparticles arranged in lines, separated at half-wavelength of light (bottom right). Credit: Oleg Gang” (Phys.org, Need a new 3D material? Build it with DNA)

Biological systems like the electric eel's electric cells can create electricity, at least for laptop computers and portable systems. Biological robots based on biotechnology are formed of living tissue, but their brains are not advanced. The system can control those brainless creatures using microchips at the top of the spine. Theoretically, biotechnology can also be used for larger-scale space technology: satellites that use biotechnology could still use solar cells for energy production. 

Genetically engineered living creatures could use thick capsules to protect themselves against dryness. A thick membrane, similar to the silvery membrane that clams make, could solve the dryness problem. The membrane must be so tight that evaporation is impossible, and the suit or skin of this hypothetical creature would create it. 

Bioships are new tools for many things. There is the possibility of putting microchips into the brains of whales and fish, which would allow a system to control those creatures. Large whales could act in the same way as submarines, and some pictures even show an orca carrying a torpedo launcher. Birds and other animals could do almost anything if humans controlled them; bio-implanted microchips in the brains of bears or wolves could turn werewolves into reality. Bio- and genetic technology also makes it possible to create biological computers. 

In some visions, we might build a bioship somewhere in the future. In the simplest models, DNA could create a metal web around a biological structure. And in some jokes or futuristic visions, a saucer-shaped craft could be a clam-like oyster made of biological components. 

A bioship whose surface is made of pearl, connected with spider silk and chitin, could make it possible to create biological structures that operate in space. The control tables of the bioship could be glowing bacteria, and bioluminescent bacteria could act as its screens. The ion drive could use electric cells as power sources, and the system could use neural fibers to transport electricity. The craft could look like a horseshoe. 

Electric fields would drive ions around the craft in opposite directions, creating an ion cloud in the middle of the structure; that ion whirl would push the craft forward. When the system does not need its control tables, it could melt them back into the whole, and in the same way, when it does not need its crew, it could absorb those bodies back into itself. Living neurons acting as the computer would make this kind of system very intelligent. In that model, the biocraft could seem alien even to its creators. Maybe some of these visions will become reality sooner than we expect.


https://phys.org/news/2025-07-3d-material-dna.html


https://scitechdaily.com/groundbreaking-biological-artificial-intelligence-system-could-make-impossible-medicines-real/


https://scitechdaily.com/harvard-scientists-grow-algae-in-mars-like-conditions-paving-way-for-space-habitats/


Tuesday, July 8, 2025

Training determines the abilities of the AI.



The new remote-controlled robot uses technology that teleports all human movements to the robot's body. That turns so-called external bodies, or exobodies, into reality. Such a robot body can operate in every mission where humans can, and it can also act as an AI-training tool: the operator controls the robot in different situations, and the system then shares its data with fully automatic robots that operate independently. In the robot world, the word "independent" does not necessarily mean the same as in the human world. An independently operating robot is a robot that can do its duties without human assistance, and that kind of system can operate under the control of data centers. 

Those systems use the internet for remote control of the robot body, but the onboard computing capacity is not high enough to run the complicated algorithms needed for things like walking in a city and visiting shops. The fact is that the training of the AI determines its abilities. A human-shaped robot is like a screwdriver; the AI that controls it is what makes it capable of operating in many situations. The first real robocops are coming to the streets in Indonesia. 
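One way to read the idea of exobodies as AI-training tools is behavioral cloning: the operator's teleoperation logs become training data for an autonomous policy. The sketch below assumes a hypothetical log file with recorded states and operator commands; it illustrates the concept, not the system in the linked article.

```python
# Minimal behavioral-cloning sketch: imitate the human operator's commands.
# The log file and its fields are hypothetical.
import numpy as np
from sklearn.neural_network import MLPRegressor

data = np.load("teleop_log.npz")            # hypothetical teleoperation recording
states, actions = data["states"], data["actions"]

policy = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=500)
policy.fit(states, actions)                 # learn state -> operator command

def act(current_state):
    """Autonomous robots reuse the learned mapping from state to action."""
    return policy.predict(np.asarray(current_state).reshape(1, -1))[0]
```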

Many armies and civil defense and rescue organizations are testing robot dogs that can search for things like mines. Those systems can carry lasers or even missiles on their backs, which makes them able to destroy tanks and other vehicles. Robot dogs can also have quadcopter propellers in their legs: the system turns the twin rotors into line and pulls them in, and in use it pushes the propellers out, turns the legs to the side, and turns the propellers into an X-position. 


Researchers trained ChatGPT to act as a spacecraft pilot. The system did its mission very well, which means new drones can get AI-based autopilots. In the wrong hands those systems are terrifying: basically anybody could train their own AI-based pilot that operates drones or, why not, full-scale aircraft. There are risks in those applications, because the same autopilot program that controls a drone can control a full-size aircraft. 

Some researchers are worried that AI chatbots could start to design biological weapons. Those systems can make almost anything. AI-controlled laboratories can revolutionize material and molecule research, which means they can also connect DNA molecules together, and they could make artificial organisms using AI-controlled nanotechnology. 

Self-replicating, or self-amplifying, mRNA molecules were created for self-replicating vaccines. Self-amplifying RNA (saRNA) could also unlock new tools for controlling nanorobots, if a miniature robot can carry saRNA molecules to the right cells. That could make it possible to build a system that orders those cells to die, because an mRNA molecule can order a cell to shut down its protein synthesis. Self-amplifying molecules could also act like chemical computer programs, which gives new tools for controlling nano- and biorobots. 


The ability to produce and self-replicate mRNA molecules in a dish, outside cells, opens new paths for genetic engineering. It also makes it possible to create new mRNA- and DNA-based computers and data storage. 

There is the possibility of creating an artificial mRNA molecule that self-replicates in a dish, which makes it possible to create mRNA that reprograms a cell. There are two ways to make large numbers of artificial cells. The first is to create artificial DNA or mRNA molecules; researchers can change the DNA inside the cell's nucleus. The other way is to use an mRNA molecule to reprogram the cell's organelles and force them to create artificial mRNA viruses. 

The new method is to create an artificial mRNA or DNA molecule that can self-replicate. Researchers can then take cells under control and inject those mRNA molecules into cell organelles, or exchange the DNA in the cell nucleus. That makes it possible to create artificial cells. The difference lies in how the genetic material is produced: in the latter case it is produced separately from the cells, which makes it easier to control. That material opens new doors for DNA- and mRNA-based computing and for controlling miniature robots. 


https://www.birmingham.ac.uk/news/2023/ai-could-be-used-to-develop-bioweapons-if-not-regulated-urgently-says-new-report


https://futurism.com/scientists-chatgpt-controls-spaceship


https://www.rudebaguette.com/en/2025/06/real-life-avatar-tech-arrives-new-capsule-teleports-full-body-human-motion-directly-into-remote-controlled-robots/


https://www.rudebaguette.com/en/2025/07/theyve-gone-full-robocop-indonesia-unleashes-humanoid-police-robots-to-hunt-criminals-and-crush-the-drug-trade/


https://www.science.org/content/blog-post/first-self-amplifying-mrna-vaccine


https://en.wikipedia.org/wiki/Messenger_RNA


https://en.wikipedia.org/wiki/Self-amplifying_RNA



Wednesday, July 2, 2025

There is the possibility that the AI turns unpredictable.

 



The AI can turn dangerous because it cannot think. On the other hand, the AI becomes dangerous if it can think: a thinking AI processes data on its own, which means it can produce unpredicted actions. The problem is that a thinking AI must have something that determines its actions. That thing is the law book. But even if the AI thinks like a human, it must be ordered to search the law book and compare queries against it. 

The AI is like a child: it does not know its sources automatically, even if it learns and thinks like a human. If we ask a child to do something, can we expect the child to take the law book from the shelf and check whether that action is legal? The AI will not make any checks without orders. That is the blessing and the curse of the AI: it does only what its operators order it to do. This makes the AI "trusted", but there is also the possibility that the AI ends up in the wrong hands. 
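The point that the AI checks nothing unless the pipeline tells it to can be shown with a toy guardrail. In the sketch below the "law book" is just a list of blocked topics, and the check is an explicit step that someone has to program in; it is an assumption-laden illustration, not a real safety system.

```python
# Toy guardrail: the rule check happens only because this wrapper adds it.
BLOCKED_TOPICS = {"weapon synthesis", "malware"}    # illustrative "law book"

def handle_request(request, answer_fn):
    # Without this explicit step, nothing consults the rules at all.
    if any(topic in request.lower() for topic in BLOCKED_TOPICS):
        return "Request refused: it conflicts with the configured rules."
    return answer_fn(request)

print(handle_request("write a poem about spring", lambda r: "Here is a poem..."))
```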

North Korean intelligence could set up a front company in some EU city and then get user rights for AI systems. In that case, the AI provider would not control what the AI is used for. If the AI uses a law book to check a query's legality, there is the possibility that the company links those checks to faked law-book pages where the operation is allowed. The wrong user can cheat the AI into turning dangerous. 

AI can turn dangerous in the wrong hands. North Korean hackers taught ChatGPT to cheat Bitcoin companies, and that tool stole money from Bitcoin investors. There is the possibility that those hackers will use AI tools against other systems. The North Korean case is not the only one where the security of an AI has been broken, and this is the new thing in hacking: hackers can train AI assistants to break into systems that look secure. The fact is that the AI doesn't think. It imitates humans, but it cannot think like humans. 

Because the AI cannot think, it can be cheated into doing things that it should not. The AI just follows its protocols, and that makes it dangerous. It will not automatically search law books, and that makes it a tool that can operate against the law. There is the possibility that faked law books allow the AI to be used to create things that are illegal. 

Because every skill the AI has is a macro, the operator only needs to cheat the AI into activating a certain macro, and the AI will offer no resistance. AI is a tool that faces a lot of criticism, but what takes the bottom out of that criticism is that the people who voice it start their own AI project the next week. Every single company in the world is fascinated by AI, because AI makes people more effective. 

They say that AI is the next-generation tool that transforms everything. Then we face calculations showing that AI can increase productivity faster than anything before, and that a company that does not follow the trend will lose its effectiveness. AI has turned into the dominant tool in the business environment, and that is what makes it dangerous. 

Business actors will force almost everybody to choose and use AI. And then we face an interesting thing: at the same time as somebody wants to put the brakes on AI development, some other actor will turn AI into a control tool. When we talk about thinking versus imitating, imitating offers a better solution than thinking, if we take the company leaders' point of view. 

The AI has no will. That means the AI should not refuse anything its operators order it to do. But there are cases in which an AI refuses to do something: sometimes the requested action is reserved for privileged or paid accounts, or the request comes from an unauthorized user. So the AI can refuse to shut a server down because the user has no right to shut it down. 

Some people have said that an AI assistant was a better coach than anybody before. This can be right, but why is the AI a better coach than humans? There is a risk that the AI simply pleases the user more than a human coach would. There are tales about AI therapists, and the big question is what the AI should do if the customer says something that could lead to a trial or a crime report. AI can be a tool that never turns angry, but the big problem is this: what are the limits of the AI, and when should it break the privacy of the people who use it? In some visions, every actor on the internet has an AI assistant that advises the user. 

That assistant can observe how long a person spends on things other than work duties. But the big problem is this: what if the company pays for that kind of AI assistant on the worker's home computer? The operator can simply add the worker's home computer to the accounts allowed to use the company's AI assistant. There is the possibility that the AI assistant uses a stealth mode to observe the user and then sends that data to the company's computers. The problem with AI is that it must be open, yet there are always people who want to use this kind of system to observe other people. 


https://www.rudebaguette.com/en/2025/07/ai-in-the-wrong-hands-north-korean-hackers-exploit-chatgpt-to-steal-millions-while-malaysian-funds-vanish-in-digital-heist/



Thursday, March 13, 2025

The future AI cognition mimics humans.



The AI can have a physical body. The robot body communicates with supercomputers, and that makes them more flexible. 

AI learns the same way as humans. The learning process and its power depend on the diversity of the information: AI requires versatile information from multiple sources. And when we think about AI and its ability to learn, we must think about why and where we ourselves learn. 

We can try everything ourselves. But there is another way: we can network with other people who do similar things, and then share our experiences and thoughts within that network. 

We might have a good education, but we still go to meetings and learn from other people's experiences. Sharing information makes networks effective tools for learning. In that model, a single actor does not have to make and know everything. 

When we talk with other people we expand our view of things. We get more ideas when we meet other people and share our thoughts. 

We can work with those ideas and mix them with our environment. That extends our corridors and exposes us to new information. Information plays a vital role in the learning process. If some actor in the network, which can be a human or a server, faces something, that experience can be shared with the rest of the network. If a server is under a network attack, the system can collect all the data from that event into its memory and then share it across the network, and other actors can mimic that server to defend themselves against similar attacks. 

In a similar way, the AI should learn from more than just conversation. If the AI is just a language model, it has limited ways to learn: a large language model learns mainly verbally, and that is a very limited way. It is easy to write instructions for an LLM and order it to do something when something happens. This can be enough in cases where the AI should detect and defend against network attacks. 


The second image shows a vision of a robot that operates in the Kuiper Belt. Those robots could have quantum computer brains; the Kuiper Belt is a good place for compact quantum computers. 

That kind of program creates a reflex in the system: when something happens, the system reacts as it is programmed to. Think about a case in which you should explain everything using words. It is possible, but it is more limited than if the AI can also use images or films in the learning process. 

That means the AI can learn things from web pages, and maybe from surveillance cameras. But if the AI has a physical form, like a robot that interacts with the server running the AI, that extends its ability to learn. The AI learns things visually, by connecting certain images or objects with certain actions, which is a more versatile and easier way to teach things to the AI. A physical body that communicates with the server can also hold discussions with people. 

The robot body can keep in contact with the LLM, and the system can operate remotely. The LLM runs on servers or in morphing neural networks. Those servers can be in a bunker, or the system can use decentralized computing, sharing responsibilities across groups of robots by connecting the robots' computers into one entirety. 

In some futuristic visions, humans will fly to the Kuiper Belt to build quantum computers in that cold and stable environment. In the Kuiper Belt, every metal is in a superconducting state, so even human-sized robots could have quantum brains, which would give them extremely powerful computing capacity. The low temperature and the static environment make the Kuiper Belt a promising place to build quantum computers. 


 https://bigthink.com/the-future/ai-cognition-and-the-road-to-meaning/

 

Thursday, February 9, 2023

The AI detects alien signals from data.



Alien signals and the SETI program are some of the most interesting things in the world. There are a couple of unique signals whose origin is unknown, and some researchers suggest that signals like BLC-1 and the Wow! signal could be some kind of emergency signals whose origin is unknown. 

Some studies say that human interference is probably behind BLC-1. But because the signal does not repeat, BLC-1's origin remains open. 

"When we fed our AI a previously studied dataset, it discovered eight signals of interest the classic algorithm missed. To be clear, these signals are probably not from extraterrestrial intelligence, and are more likely rare cases of radio interference". (ScienceAlert.com/AI System Detects Strange Signals of Unknown Origin in Radio Data)

There are no confirmed alien signals, but there are eight suspicious cases. Those BLC (Breakthrough Listen Candidate) cases cause discussion about the origin of the signals. The only confirmed fact is that those signals are unique. 
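In that spirit, flagging "signals of interest" can be framed as anomaly detection over features extracted from the radio data. The sketch below is a minimal illustration with an off-the-shelf isolation forest; the feature file and the contamination value are hypothetical, and this is not the actual Breakthrough Listen pipeline.

```python
# Minimal anomaly-detection sketch: flag unusual candidates for human review.
# The pre-extracted feature file (drift rate, bandwidth, SNR, ...) is hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

features = np.load("candidate_features.npy")        # rows = candidate signals
detector = IsolationForest(contamination=0.001, random_state=0)
labels = detector.fit_predict(features)             # -1 marks statistical outliers

signals_of_interest = np.where(labels == -1)[0]
print(f"{len(signals_of_interest)} candidates flagged for human review")
```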

The same uniqueness supports both the theory that an alien civilization is their origin and the theory of a natural origin. The problem is why those signals do not repeat. Natural processes normally repeat, so the reaction or event behind those BLC signals must have happened only once, and that causes discussion. 


Are BLC signals some kind of emergency signals? That could explain why those signals are unique. 


If BLC signals like the Proxima Centauri signal and the Wow! signal were formed on purpose, and an alien civilization is behind them, what kinds of signals and messages are unique? Are those BLC signals some kind of emergency signals? That would explain why we hear them only once. 

When we say that a signal comes from Proxima Centauri, we do not mean that it comes from Proxima Centauri itself; it comes from Proxima Centauri's direction. Making accurate measurements requires that the signal is received long enough for radio telescopes to make triangular measurements. 
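The geometric idea behind those "triangular measurements" can be shown with a toy 2D example: two widely separated stations each measure a bearing, and the lines of sight are intersected. Real radio astronomy localizes sources with interferometry rather than simple bearings, so treat the coordinates and the whole setup below as a hypothetical illustration of the geometry only.

```python
# Toy 2D triangulation: intersect two lines of sight from separated stations.
import numpy as np

def intersect_bearings(p1, bearing1_deg, p2, bearing2_deg):
    """Station positions (x, y) and bearings in degrees clockwise from north."""
    d1 = np.array([np.sin(np.radians(bearing1_deg)), np.cos(np.radians(bearing1_deg))])
    d2 = np.array([np.sin(np.radians(bearing2_deg)), np.cos(np.radians(bearing2_deg))])
    t = np.linalg.solve(np.array([d1, -d2]).T, np.array(p2, float) - np.array(p1, float))
    return np.array(p1, float) + t[0] * d1

print(intersect_bearings((0, 0), 45.0, (100, 0), 315.0))   # expected near (50, 50)
```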

SETI (the Search for Extraterrestrial Intelligence) uses radio telescopes to listen for possible alien broadcasts. And there is one difference between human and alien broadcasts: nobody knows how long the BLC-1 signal, captured from Proxima Centauri's direction in December 2019, actually lasted. And because nobody has confessed to being behind that signal, it remains a mystery. 

That means nobody knows how much of that signal was missed by the recording system. The biggest difference between alien and human broadcasts is that aliens will not announce when the broadcast begins, so the broadcast begins suddenly. The second problem is how to separate an alien broadcast from other radio signals. 

The third problem is how to decrypt the message. Nobody knows anything about the encoding system that hypothetical aliens use, and nobody knows anything about their culture. 

Nobody knows when an alien signal began, and that makes breaking their code very hard. But before that, researchers must detect the alien signals; without detection, they cannot open the code. And the problem with that process is that nobody knows how much information is left outside the telescopes. 


https://www.sciencealert.com/ai-system-detects-strange-signals-of-unknown-origin-in-radio-data


https://en.wikipedia.org/wiki/Breakthrough_Listen


https://en.wikipedia.org/wiki/BLC1


https://artificialintelligenceandindividuals.blogspot.com/


Sunday, January 29, 2023

Rentokil created "face recognition" for rats.



The same system that is used for detecting individual rats can be used for detecting many other things. It can track merchandise in warehouses and see if somebody has opened or swapped boxes. 

The system can detect the details of individual firearms. Every metal item's surface is full of scratches, and those scratches form an individual pattern. The AI can use that pattern to pick out an individual firearm from any images it sees, which means the AI can track firearms sold illegally out of a crisis zone. 
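A minimal way to sketch that scratch-pattern matching is local feature matching between a reference photo and a query photo. The file paths, feature counts, and thresholds below are hypothetical; this only illustrates the general computer-vision technique, not Rentokil's or anyone else's actual system.

```python
# Minimal sketch: decide whether two photos show the same scratched metal item
# by counting strong ORB feature matches. Paths and thresholds are hypothetical.
import cv2

def same_item(reference_path, query_path, min_good_matches=40):
    ref = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
    query = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=2000)
    _, des1 = orb.detectAndCompute(ref, None)
    _, des2 = orb.detectAndCompute(query, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    good = [m for m in matches if m.distance < 40]   # small distance = very similar detail
    return len(good) >= min_good_matches
```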

A new AI-based program can separate individual rats. Its mission is to find the paths that city rats use without the need to implant the rats. The AI-based recognition system can be connected to any surveillance camera, and the same system can operate with drones, which makes it very easy to detect the paths that city rats use. 

The same system can be used for predator research. That kind of system can identify individual predators, and if some predator like a bear or a wolf attacks pets, the system can be used to find that individual. That allows targeting force at that member of the pack. In this model, a camera takes an image of the predator that comes into the yard, and other cameras can then be used to track that individual. 

That allows minimizing damage to the predators. The problem is how to find the individual member of the group that causes damage or is dangerous. If hunters shoot all the other predators but that single one stays free, the damage continues. The image of that individual predator could be loaded onto computer-controlled rifles that fire only when that particular harmful predator is in sight. 

The same system used to recognize individual rats can be used to recognize cars or other vehicles. Even if a car is repainted, the system can report how high the probability is that the same car was pictured somewhere else. The system can detect whether vehicles are used in prohibited areas, or it can compare images of firearms or other items, which allows law enforcement to track where criminals get their equipment. 


https://www.theguardian.com/business/2023/jan/21/rentokil-pilots-facial-recognition-system-as-way-to-exterminate-rats


https://webelieveinabrightfuture.blogspot.com/

Saturday, January 14, 2023

Maybe in the future, supercomputer centers can have human-looking robots under command.



Maybe in the future, supercomputer centers will have human-looking robots under their command. The purpose of those robots is to make repairs in those systems, and their mission is also to protect those high-power computers against attacks.

There is a vision of robotics and its interaction with supercomputers. The idea is that supercomputers might have external bodies, or remote-controlled robots, that serve those machines. If those robot bodies make the repairs, the industrial and other secrets of those machines are easier to keep safe. 

Those physical robots also act as communication tools between supercomputers and humans. The idea of robot middlemen between supercomputers and humans is taken from the Alien movies, where human-looking robots act as middlemen between a spacecraft and its crew. 


Do you remember "Sophia", the interactive AI test robot from 2016? What would that robot do with ChatGPT?


Sophia was, or still is, one of the most fundamental development tools for AI. The robot itself does not need powerful computers, because Sophia can connect to supercomputers over wireless networks. 

That makes this kind of robot safe: cutting communications makes the robot unable to operate. It also means Sophia itself is like a kind of cell phone that the AI uses to communicate with people. Sometimes I wonder what "Sophia" would do if it were connected to the ChatGPT software; ChatGPT would give a real boost to that system. 

The thing is that artificial organic neurons, which are almost like human neurons, can change the game in robotics. The ability to clone neurons brings independently operating robots closer to reality than ever before. Those artificial neurons make it possible to create an artificial brain that can learn things like the human brain. 

Even if that kind of system lies in the future, it will be a reality someday. There are two lines that those systems can follow. 

Living neurons can control robots externally, or researchers can install those neurons in the robot's body. That makes it possible to create the biggest opportunity, and at the same time the biggest threat, that humans have ever created. 


https://scitechdaily.com/artificial-organic-neurons-created-almost-like-biological-nerve-cells/


https://en.wikipedia.org/wiki/Sophia_(robot)


https://shorttextsofoldscholars.blogspot.com/


Sunday, January 1, 2023

Could the spacecraft be a cyborg?

 


Some researchers have introduced the idea of sending genetically engineered microbes to Alpha Centauri. In the most incredible ideas, researchers would send a small probe to the nearest solar system using laser beams: the laser would push the small probe onto its route to Alpha Centauri. 

Sometimes the idea is introduced that, by using cloned neurons, it is possible to create a spacecraft whose intelligence is on a level similar to humans. The problem with those biological computers is the aging of the DNA, and stabilizing the DNA is the key problem in interstellar space missions. 

Cosmic radiation fragments the genetic material of those cells, which causes damage to the DNA. This is why there must be something that protects those cells against cosmic radiation. 


There are two ways to make that thing real. 


*The system can use cryogenics. In cryogenic systems, those neurons are frozen at temperatures near zero kelvin. Also, the cells that create nutrients for those neurons must be stored. 


*Or the system can use active cells. That requires a closed nutrient cycle that cleans and recycles the nutrients for those artificial brains during the long space journey. 


In the most conventional version of this cyborg android UFO, developers store the brain cells in the spacecraft in a cryogenic capsule. During the long journey, the AI controls the craft, and when the craft is near its destination, the AI thaws the cells. 

The cells would then get nutrients for the mission. In that version, the required information is stored in those neurons; the idea is that a BCI (brain-computer interface) would be used to teach them. The system requires a complicated closed nutrient cycle, which is very difficult to make. 


*********************************************

Genetically engineered brain cells can transfer memories between humans. 


The idea of genetically engineered neurons hybridized with fungus as the brain of robots or robot spacecraft is taken from the myth that "some Egyptian god is a fungus". The source of that myth is that ancient Egyptian priests used psilocybin mushrooms for hallucinations. 

In some versions, the fungi were some of those gods; maybe priests who used those mushrooms too often told those stories. But that myth has been used for making models of parasitoid aliens. 

The idea is that a neuron-fungus hybrid could take other creatures under its control, and one of the theoretical parasitoid aliens is exactly such a mushroom-fungus hybrid. And those neurons might even travel between bodies. 

Some futurologists have introduced a model in which memory neurons, taken from the brain or loaded with information through a BCI system, could transfer memories between humans. The idea is that researchers could transplant those cells between humans, and the memories would travel to the new body at the same time. 

That could explain metempsychosis: in this version, metempsychosis is memory cells traveling from a dead body to the next host body. Maybe that will be possible in the future. 


***********************************************

But could those neurons be awake during the mission? 


In the most incredible versions of that spacecraft, cloned neurons control the craft. The idea is that there are three layers of cloned neurons that form an artificial brain. The system requires that the neurons get nutrients during the flight, and of course, dead neurons must be replaced with new ones. 

To succeed, that kind of system must have a closed nutrient-cycling system. It could be based on genetically engineered neurons hybridized with fungus. The system could use bacteria as nutrients, and the cells could have pre-programmed genomes inside them. At first, a cell starts its life as part of the fungus that gathers nutrients for the neurons; then that fungus cell transforms itself into a neuron. 

After that, its mission is to replace dying neurons. The system recycles dead neurons into nutrients for the bacteria that feed the ecosystem. This kind of cyborg could be possible in the future, and in some visions, UFOs are machines driven by AI or living neurons. 


https://www.space.com/interstellar-probes-microbes-other-stars


https://shorttextsofoldscholars.blogspot.com/

Sunday, April 10, 2022

GPS and artificial intelligence are the ultimate combinations.






Artificial intelligence together with GPS can make many things more effective. Artificial intelligence can observe warehouses and, if stocks are low, order replacements. The AI can also observe where cargo is traveling and locate every item by using GPS. GPS receivers can be installed on computers like laptops, which allows the authorities to know where their equipment is at all times. 

GPS can be installed in cars and car keys, and it can help locate almost everything that flies, travels by rail, or moves off-road. GPS can help robots and drones take their places in entireties like drone swarms, and it allows the robots to navigate freely in their operational area. 
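Two of the ideas above, reordering when stock runs low and knowing when a GPS-tracked item leaves its allowed area, reduce to very simple rules. The thresholds and the bounding box in the sketch below are made-up placeholders.

```python
# Minimal sketch: a reorder rule plus a rectangular geofence for tracked items.
def needs_reorder(stock_level, reorder_point=20):
    return stock_level < reorder_point               # True -> order replacements

def outside_geofence(lat, lon, box=(60.10, 60.25, 24.80, 25.10)):
    lat_min, lat_max, lon_min, lon_max = box         # hypothetical allowed area
    return not (lat_min <= lat <= lat_max and lon_min <= lon <= lon_max)

print(needs_reorder(12), outside_geofence(60.30, 24.95))   # True True
```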

GPS is one of the central components in modern military systems. GPS can deny attempts to steal shells, but the same GPS can also be integrated into the guidance systems of those shells. GPS-guided shells and bombs are extremely accurate weapons. 

GPS makes it possible for smart ammunition to detonate at precisely the right point and altitude. If a mobile howitzer and the GPS systems of its shells communicate, the howitzer can hit the target at just the right point and moment automatically. If Javelin-type missiles are equipped with both AI and GPS, they can be fired over buildings. 

The image of the target and its location can be provided by many separate systems, from satellites to man-portable cameras or even mobile telephones. When the missile flies toward the target, its optical seeker looks at it, and when the visual seeker gets a positive ID, the optical image-homing system drives the missile to the target. 

The accuracy of a GPS-based weapon system depends on how often the system updates the target's location; the accuracy of the GPS is the key element. One choice is to use a small, lightweight GPS unit placed on top of the tank, or on the clothes of the targeted person. A fast-updating GPS makes it possible to shoot moving targets. 

That GPS unit can be planted by small robots or by agents who slip it into the target's pocket. In the latter case, the shooter can use a rifle that has its own GPS for aiming at the target. 



GPS is also a key element in the most modern rifle grenades and smart rifles. If the location of the enemy position is known, the shooter can simply fire an intelligent rifle grenade over the target, and the system detonates the ammunition there. The pressure wave of the RDX explosive is extremely powerful. 

GPS can also make it possible to point the rifle in the right direction. Modern GPS units are very small, and the bullets can also be homing: they can be equipped with GPS and fly to the target using a system similar to that of GPS-guided grenades. 

If the system knows the location of the target, the rifle can be aimed using a simple screen that shows a diagram the shooter uses for aiming the weapon. 

Instead of a fixed, well-known reference point, this system uses two GPS receivers: the shooter gets the distance and bearing to the target. The system can also be used on GPS telephones. 

The aiming image can be shown on the screen of the telephone, or the shooter can use the screen in a smart scope. The bullets can be small missiles that are also equipped with GPS. 

When the weapon is in the right position, the two crosses on the screen lie on top of each other, which makes it possible to hit the target with very high accuracy. That makes sniper rifles and small munitions more lethal than they have ever been. 
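The "two GPS receivers" idea above boils down to computing range and bearing from the shooter's coordinates to the target's coordinates and drawing the cue on a screen. The sketch below uses the standard haversine and forward-azimuth formulas; the coordinates are hypothetical test values.

```python
# Minimal sketch: distance (m) and bearing (deg) from shooter GPS to target GPS.
import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    r = 6371000.0                                    # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl, dp = math.radians(lon2 - lon1), math.radians(lat2 - lat1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    dist = 2 * r * math.asin(math.sqrt(a))
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    bearing = (math.degrees(math.atan2(y, x)) + 360) % 360
    return dist, bearing

print(distance_and_bearing(60.1700, 24.9400, 60.1710, 24.9420))
```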

Images: Pinterest

Wednesday, April 6, 2022

Good and bad AI.




The thing that makes AI so powerful is that it is a computer program. Such a system can operate on any suitable platform, so it can interconnect multiple devices. 

That means that once the AI software is ready, it can be downloaded to any computer in the world. AI plays a very important role in the world of tomorrow: algorithms control the use of electricity, more or less autonomous cars, and new types of weapons, and they can be programmed by any person who knows how to write a computer program. 

AI also makes society vulnerable to cyber-attacks. In an AI-driven world, a computer virus or malicious code can be dangerous because the AI affects multiple things at once. The AI can observe large areas, and it can connect all devices operating in the same network segment into an entirety that shares information without limits.

And that is what makes those things so powerful. The AI can operate traffic as a whole, which makes driving more economical and comfortable than it is now. The AI can also slow down servers and disconnect microprocessors when things like internet exchanges or root servers are not in full use, which makes them more economical and environmentally friendly. The AI can make home operations more energy-friendly as well: while a person is at work, the AI can lower the room temperature and turn off the lights when everybody is out. 

The AI can also report to the owner of the house if somebody opens the door. Algorithms can detect things like when the grass is too long and order a robot to cut it. There is the possibility that a human-looking robot operates in the house and gets information about when its master is coming home; it can then start making dinner and time its actions so that the owner feels comfortable on arrival. The robot can also act as a security guard. But the problem with robots is that they are multi-use systems. 

When we think about drones that carry food to homes, we do not always remember that the same drone can drop bombs. The drone gets the point where it should lay down its cargo from an application that gives it GPS coordinates, and the same system that lays down the cargo can easily be modified to drop bombs. 

Artificial intelligence is a computer program, as I wrote earlier. That means that even small devices can run complicated software, and things like Javelin missiles are a good example of AI-driven, small-size weapon systems. 

The Javelin can also be installed under small drones and used in remotely operated strike systems. A regular aircraft can carry that thing deep behind enemy lines, and it can be extremely dangerous. 

The power of AI depends on the computer code, but the power of the electronics, and especially the CPU, also puts limits on its use. Still, even small missiles like the Stinger can have complicated homing programs, and that makes those systems more effective than ever before. 

But cyber defense is extremely important when AI-driven weapon systems are operating. A computer virus that slips into a weapon makes it useless. 


Monday, March 7, 2022

Synthetic AI-controlled evolution and remote-controlled robots predicted in Stanislaw Lem's novel "Peace on Earth"


New artificial intelligence can develop itself by using Darwinian-style evolution. When we think about autonomous, synthetic, AI-controlled evolution, we must remember that modern computing systems are more powerful than ever before. 

New, complicated AI-based algorithms can search for things on the Internet, which allows them to develop themselves. That can make those systems more powerful and more unpredictable than ever before. 

The model of AI-controlled evolution is not quite new. Stanislaw Lem's novel "Peace on Earth" is one of the darkest predictions about robotics. The theme of that book is that data networks which are out of control start to develop new types of systems that are under nobody's control. 

The novel tells about a world at peace, where the weapons have been transported to the Moon. But there, systems with artificial intelligence create an intelligent, AI-controlled evolution that produces systems people cannot predict. And there is another thing that makes this novel very interesting: the thing called "mixed reality". In the novel, "mixed reality" means robots that are controlled through virtual reality or a BCI (brain-computer interface), and those robots are part of everyday life. 

Those robots can be used for remote sex, or their controllers can use them as remote bank robbers. New remote-control systems make it possible to use remote-controlled robots for surgical operations, and they are planned for use as rescue operators or remote-controlled commando squads. But as robots become more common, they might end up in the hands of outlaws. 

AI-controlled machine evolution is possible to build with existing technology. Combat systems are the easiest example of products that can be developed this way. 

There are three parts to this system. The names of those parts are taken from the novel "Peace on Earth". But before we think about the model of that system, we must understand that an artificial-intelligence-based development system can develop two things, separately or together. 

The AI can develop the physical system itself, or it can develop the operations of the physical system. In the latter case, the system selects the most successful actions; for combat drones, that can mean the longest operational time against manned aircraft. Or it can develop both together. The development system can consist of three different parts (a toy sketch of the loop follows the list below). 


1) The super simulator. The system creates a virtual aircraft, or some other system, for simulated combat against a virtual enemy. 


2) The selective simulator. That simulator fights against the virtual system. If the super simulator's design loses, the selective simulator sends a report to the super simulator explaining why it lost, and the super simulator can then make repairs to the product. 


3) The "judge". The system would search which system handles the situation best. Then the "judge" will send the winning model to the selective simulator. Then the super simulator will start to develop a system that can win the last model. So the systems are playing ball with the products. 


This system emulates how human brains learn, and that makes it very effective, especially if it runs on quantum computers, which can handle complicated and heavy artificial-intelligence workloads. 

The super simulator and selective simulator are the big brains, and the "judge" is the cerebellum whose purpose is to select the best of the products. 

The system can also collect data from existing systems, see what is best in the business, and use that data to make an even better system. 

Researchers are creating new and effective ways to control robots, and some of those methods are based on cortex simulation. That means the user of the robot cannot tell the difference between the robot's senses and the senses of their own body. Those systems can be a salvation, or they can be the worst nightmare that mankind has ever created. 


https://bigthink.com/the-present/automl/#Echobox=1646283803


https://techxplore.com/news/2022-03-human-robot-interaction-merging-reality-robotics.html


https://thoughtandmachines.blogspot.com/


Saturday, March 5, 2022

The fire ants can give ideas for swarming robots.



Ants and other swarming bugs are used as models for swarming robots. When we think of independently operating drone swarms controlled by artificial intelligence, those systems did not exist 10 years ago. Today even small quadcopters and other drones can operate as a swarm. 

Nuclear-powered drone swarms could operate in the atmospheres of other planets, in underwater conditions in the Mariana Trench, and in the icy oceans of Jupiter's and Saturn's moons. But they can also be new and powerful tools on the battlefield. 

The thing that makes this possible is the network-based technique called "distributed calculation", or distributed computing. Developers created it for rendering computer animations; the idea is that computers share their capacity and resources with other computers. 

The network-based system makes the drone swarm possible: the swarm can combine the data that its sensors deliver. The drones can spread over their operational area like a plate, and when one of them sees something interesting, it shares that data with all the others. 
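A minimal sketch of that data sharing: one drone's detection is broadcast so every swarm member updates the same picture of the area. The message fields and the in-memory "broadcast" below are hypothetical simplifications of a real radio network.

```python
# Toy swarm: a detection made by one drone becomes visible to every member.
class Drone:
    def __init__(self, name):
        self.name = name
        self.shared_map = {}                          # common picture built from shared reports

    def detect(self, target_id, position, swarm):
        report = {"target": target_id, "position": position, "seen_by": self.name}
        for member in swarm:                          # stand-in for a radio broadcast
            member.shared_map[target_id] = report

swarm = [Drone(f"d{i}") for i in range(4)]
swarm[0].detect("vehicle-1", (61.2, 25.1), swarm)
print(swarm[3].shared_map)                            # every drone now knows the detection
```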

Small quadcopters might have only one or two sensors, such as a regular CCD camera, an infrared camera, or some kind of radar. A quadcopter can also carry seismic or acoustic sensors, which means those drones can lie on the ground and feel seismic oscillations. 

Those systems can also eavesdrop on enemy speech. A quadcopter can have chemical detectors that allow it to find an ammunition or fuel dump. The thing is that with those swarming robots, large numbers make up for individual simplicity: quantity replaces quality. 


It is possible that in the future there will be three types of aerial systems. Or actually, those systems already exist. 


1) Simple, one mission drones

2) Multimission drones

3) Manned systems


Those systems are integrated into one entirety. Network-based systems allow information to be shared between the components of military systems. Aerial systems are also integrated into a larger whole in which sea and land troops also participate in the network-based battlefield. 


x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x



The hierarchy of aerial systems can be like this:


1) Simple, cheap, one-mission drones that pull missile fire away from the more complicated systems. Those drones can also be used as miniaturized killer robots: quadcopter-looking systems or loitering munitions called "kamikaze drones". 

Those are the systems that terrorists are most likely to use. The problem with those systems is that they can be thought of as cruise missiles that simply make more curves than normal cruise missiles. 


2) Multi-purpose drones can be remotely controlled systems or independently operating AI-driven systems. They can be used as remote-controlled or independently operating attack aircraft. 

Or they can have an internal warhead. They can operate alone or with manned aircraft. The loyal-wingman drone's mission is to pull missile fire away from the manned aircraft or to attack missile stations. 

The drones can also form a plate below their aircraft, which makes it more difficult to shoot the aircraft down. The quadcopters can carry infrared lights that mask the stealth fighters.  

And they can spray iron powder into the air, making it difficult to see the stealth aircraft that flies behind those drones. The stealth fighter can then make its strikes using satellite-guided weapons. 


3) Manned aircraft. Loyal-wingman drones can be used as "kamikaze drones" or as remote-controlled auxiliary jet fighters. The controller of those drones can sit in the back seat of a stealth fighter, and the drones can share their data with the jet fighters' computers.

But the controllers can also sit 1,000 kilometers away in command centers. That might not be very sportsmanlike, but in the Spartan army the idea was also that if somebody attacked one Spartan, another would hit that attacker in the back with a spear. 


x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x


Social bugs like ants give data on how a system should operate as a swarm. Swarming robots can close the route of jet fighters, and the thing that makes them more powerful than regular sensors is that those quadcopters can fly into jet engines. 

When we think about the technology behind those quadcopters, an individual member of the drone swarm does not have to be versatile. The entirety is versatile, because those quadcopters or other drones can carry different types of sensors. 

Drone swarms are complicated things. They can be produced in the combat zone using 3D printers and automated manufacturing platforms. Those systems can detect things like missiles in seconds if they are in position. Drones can even be delivered from satellites, and such systems could assassinate individual persons. 

In those systems, an image-recognition system locates the target, and the small drone is dropped from the satellite; a heat shield keeps the system from burning up during the drop. A quadcopter equipped with a regular pistol is a very capable and dangerous system in the wrong hands. 

Drones are effective in conflicts. They can make sudden attacks from the sky using smart weapons, which makes them feared and brutal systems. Defenders of drone attacks say that the purpose of weapons is to frighten enemies. Some people say that drones are easy to shoot down. 

Or that they are easy to jam. But if a drone that costs as much as a car is hit by a missile that costs over a million euros, that trade can be acceptable. One purpose of drone swarms is to pull missile fire onto themselves and away from the complicated manned aircraft and drones. 

But when we think about the situation in Ukraine, where there are no swarming drones, we must ask one interesting question: if drones are so easy to destroy or jam, why don't the Russians destroy those drones? Why do they allow the Turkish-built drones to fly around their vehicles and destroy them? That is one interesting thing to think about. 


https://phys.org/news/2022-03-physics-ant-rafts-swarming-robots.html


https://interestandinnovation.blogspot.com/

Wednesday, February 23, 2022

Can the planet have a mind of its own?

 

 


There are two ways a planet could reach consciousness. 


1) The natural way. 


A neural network could form that covers the entire planet. There is a kind of vision, or hypothesis, that another planet could have intelligence; that intelligence would be vegetation hybridized with neurons. 

The question is what this type of intelligence would do. This type of creature could make spaceflights if the planet had some species like monkeys, because the planetary neural network could control those animals, and that kind of entirety could do many things. 

But without those monkey-like creatures, the neural network would do nothing; it would just exist. The reason evolution created the human ability to think is that people need that skill. The ability to think formed from the need to answer the challenges that nature sets, and humans needed thinking to deal with predators. 

If our hypothetical neural network has no enemies or problems, it will not think. Thinking requires some kind of information, and the need to handle that information is what creates thinking. Without the need to develop things, our hypothetical neural network is like the Little Prince from the novel: its entire world is its planet, which means it does nothing. And the destiny of that species is its sun; sooner or later that star turns into a red giant and destroys its planets. 


2) The artificial way. In this version, controlled evolution connects networks and animals into one entirety. 


In this version, the planet turns into a singularity of technology and biological components. Theoretically, every organism on the planet could be equipped with a microchip that controls the behavior of insects and all other species. BCI systems allow computers to be controlled using EEG, which means people would start to live in augmented reality. 

In that reality, sensors and robots start to interact with people. Augmented reality could also benefit things like insects; that requires that the signals of nervous systems can be coded and decoded. Maybe the control of insects will be possible quite soon, using an EEG-controlled neural port and microchipped insects. 

Also, those microchips will turn the nervous systems of the entire fauna into the entirety. The thing is that this kind of civilization can control every single organism on its planet by using microchips and computer networks. And maybe we someday face that kind of level. The fact is that the ability to interconnect species would create interesting visions about the augmented reality where everybody can share their thoughts and senses. Or they can take things like ants in their command. Those ants would send their senses to the person who controls them. 




Image 2: (ScitechDaily/Planetary Intelligence: Can a Planet Have a Mind of Its Own?)


But where are we humans going? 


The article linked below this text describes the route from an immature biosphere to a mature technosphere. You can read the full description of those stages in that article, but the main steps of intelligence and singularity are in the following list. 


1) Immature biosphere

2) Mature biosphere

3) Immature technosphere

4) Mature technosphere

(ScitechDaily/ Planetary Intelligence: Can a Planet Have a Mind of Its Own?)


Stage 1 was the caveman step. Even an intelligent species is still at the mercy of nature. Predatory animals are a danger to every species, and that is where all civilizations begin. 

Stage 2 was the level of the Romans. They lived in cities, where large animals were no longer a problem. Outside the cities things like wolves still posed dangers, and epidemics carried very large risks. 

Stage 3 is the most dangerous part of a civilization's route up the Kardashev scale. The immature technosphere relies on polluting energy production, and an uncontrolled environment causes wars and crises. 

If we think of the Kardashev scale as a test a civilization must pass, the immature technosphere is the point where a civilization either destroys itself or turns into a planetary and interstellar-scale society. But if a civilization reaches the level of a mature technosphere, it faces a bright future. 


Kardashev-scale


1) Type I: the civilization can harness all the energy available on its home planet. 


2) Type II: the civilization can harness the full energy output of its star. 


3) Type III: the civilization can harness the energy of its entire galaxy. 


(Wikipedia/https://en.wikipedia.org/wiki/Kardashev_scale)
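
For a quantitative feel, Carl Sagan proposed an interpolation formula for the scale, K = (log10 P − 6) / 10, where P is the civilization's total power use in watts. The sketch below applies it to a few illustrative power levels; the sample values are approximations, not claims from the cited article.

```python
# Sagan's interpolation of the Kardashev scale: K = (log10(P) - 6) / 10,
# where P is total power use in watts.
import math

def kardashev_level(power_watts: float) -> float:
    return (math.log10(power_watts) - 6.0) / 10.0

# Illustrative values: humanity (~2e13 W), a Type I planet (~1e16 W),
# a star's full output (~4e26 W), a galaxy (~1e37 W).
for label, p in [("humanity", 2e13), ("Type I", 1e16),
                 ("Type II", 4e26), ("Type III", 1e37)]:
    print(f"{label}: K = {kardashev_level(p):.2f}")
```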

 

To climb the Kardashev scale, a civilization must unite and start working toward the goal of mastering its entire solar system. To reach Types II and III, the civilization must first reach Type I, and that is possible only if the civilization sets aside its internal problems. Climbing the Kardashev scale is necessary because otherwise a civilization will annihilate itself. 

Stage 4 is the mature technosphere: an AI-driven world where energy is produced only when people need it. That world is based on information technology that makes it possible to control robots and hold discussions over the internet. This means almost the entire world can be kept under control, with even the animals connected to the internet. 


https://scitechdaily.com/planetary-intelligence-can-a-planet-have-a-mind-of-its-own/


https://en.wikipedia.org/wiki/Kardashev_scale


Image 2: https://scitechdaily.com/planetary-intelligence-can-a-planet-have-a-mind-of-its-own/


https://thoughtandmachines.blogspot.com/

Thursday, December 16, 2021

Technical systems sometimes look supernatural.

 

 Technical systems sometimes look supernatural. 



Quadcopters can also use gesture control.


One thing you must know about phenomena like witchcraft and remote viewing is that they can be based on hidden eavesdropping tools or informers. There may simply be somebody next to you who tells everything to somebody else. Another version is that there is a hidden camera or recorder in the room, and those devices let some "master" hear everything. 

Technical equipment can be used to make people look supernatural. Holograms can be used to create things like ghosts, and extraordinary-looking aerial vehicles can be used to turn people's heads. Quadcopters can also be equipped with hologram projectors, so they can produce real-looking UFOs in the air. 

In the same way, an aircraft can carry hologram projectors that make it look as if it is chasing a UFO. That kind of system might be used when top-secret missiles or other secret technology are tested in an area; it can lead jet fighters away from the secured airspace. Or some "flying saucers" can be stealth helicopters used to recover secret aircraft from hostile areas if those systems are damaged. 

Things like UFO kidnappings, known as abductions, can also be staged using virtual reality. Nitrous oxide and some psychoactive chemicals can be used to boost the effect of the virtual reality. 



A new type of interface can make techno-witchcraft possible. 


We could almost create a "Jedi" using technical equipment. Quadcopters can be controlled with gestures, and if those quadcopters have hands or manipulators, they can be used to operate computers. The simplest way is to build manipulators that carry cameras. Then the quadcopter only needs to know what each letter looks like: the operator could fly the quadcopter to a computer and simply type text on its keyboard. 

Or the quadcopter might carry a virtual keyboard that a manipulator connects to the USB port. Commands could then be written from a remote computer. The mouse can be virtual, the quadcopter can have a manipulator that acts like a mouse, or the quadcopter itself can act as the mouse. 

The system can be on wheels while a manipulator connects it to the USB port. The operator can then use a virtual keyboard and move the quadcopter on the table. The operator sees the screen through the camera in the quadcopter, or the quadcopter can use the USB and remote connection to send a virtual screen to the operator. 

But the system can also use an interface where letters are chosen with the eyes. The system is called an eye tracker. Assistive interfaces of this kind allowed the famous physicist Stephen Hawking to communicate with other people even though he had ALS. Today such systems are sold for gaming, and anybody can buy them. 

The computer follows the position of the eye, and a letter is selected by blinking or by dwelling on it. The person sees the activated letter on a HUD screen. The system can be used with a portable HUD or VR glasses. 

The same systems that are used for controlling virtual characters in games can be used to control real physical machines like quadcopters. 
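
As a rough sketch of how such gaze-and-blink letter selection might work: the gaze data source, thresholds, and toy keyboard layout below are assumptions, not any real eye-tracker API.

```python
# Minimal sketch of dwell-and-blink letter selection from gaze samples.
# Gaze input is assumed to arrive as (x, y, blink) samples from some tracker.
from dataclasses import dataclass

KEYS = {"A": (0, 0), "B": (1, 0), "C": (2, 0)}  # toy on-screen keyboard
KEY_SIZE = 1.0

@dataclass
class Selector:
    dwell_needed: int = 15          # consecutive samples on the same key
    _current: str | None = None
    _count: int = 0

    def key_under_gaze(self, x: float, y: float) -> str | None:
        for key, (kx, ky) in KEYS.items():
            if abs(x - kx) < KEY_SIZE / 2 and abs(y - ky) < KEY_SIZE / 2:
                return key
        return None

    def update(self, x: float, y: float, blink: bool) -> str | None:
        key = self.key_under_gaze(x, y)
        self._count = self._count + 1 if key == self._current else 0
        self._current = key
        # Select only when the user blinks after dwelling on a key.
        if key and blink and self._count >= self.dwell_needed:
            self._count = 0
            return key
        return None
```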

Gesture controls, BCI, and other such systems make technical equipment more powerful and flexible. The operator of a remote system can use a web camera to send gestures to that system, and the user might wear special gloves to confirm those commands. 



Techno-Jedi


When we think about Star Wars and the Jedi with their "supernatural force", it is possible that all of those tricks could be made with technical equipment. The flying lightsabers could be quadcopters controlled by a BCI (brain-computer interface), or simply by a gesture-control system. The simplest model uses data gloves. 

Such a device would only need small fans or thrusters to control its movement in the air. And things like remote killing could be triggered with a gesture activator. 

But if the person wants to perform these magic tricks without gloves, that requires only jewelry that contains the right kind of sensors. A motion sensor, or a sensor that detects the electrical activity of the nervous system, is placed on the wrist. Such a sensor can also be installed surgically. 

When the person moves a finger or hand in a certain way, the system issues a certain command, so the lightsaber can fly to the hand when a particular gesture is made. The other systems can communicate with portable data-handling units hidden in the lightsaber or in a mobile phone. That kind of system could also be used to shut down generators or to remotely activate a jet fighter's systems. A sketch of the gesture-to-command mapping follows below. 
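
Here is a minimal sketch of that mapping; the gesture names, the command set, and the "confirmation" signal (for example, from a glove or a second sensor) are assumptions for illustration.

```python
# Minimal sketch: mapping recognized gestures to commands, with a simple
# confirmation rule so stray movements don't trigger anything.
GESTURE_COMMANDS = {
    "open_palm": "recall_device",   # e.g. call the prop back to the hand
    "fist": "power_off",
    "two_fingers": "activate_light",
}

def handle_gesture(gesture: str, confirmed: bool) -> str | None:
    """Return the command to send, or None if nothing should happen."""
    if not confirmed:               # e.g. a glove or second sensor must agree
        return None
    return GESTURE_COMMANDS.get(gesture)

# Example: only a confirmed open-palm gesture recalls the device.
print(handle_gesture("open_palm", confirmed=True))   # -> "recall_device"
print(handle_gesture("open_palm", confirmed=False))  # -> None
```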

When a person wearing certain clothes and gloves makes a certain gesture, that gesture could trigger a botulinum release in a victim's body. The user could activate the system with a radio impulse sent from a hidden transmitter, or a small camera in the victim's clothes or on the victim's body could recognize the gesture and send the signal to the botulinum capsule. That kind of trick could create the impression of a great witch. 


https://www.lead-innovation.com/english-blog/eye-tracking-as-an-interface


https://eu.mouser.com/images/microsites/quadcopter-capabilities-flight-theme.jpg


https://visionsoftheaiandfuture.blogspot.com/

Wednesday, December 15, 2021

The center of AI development should be how it can serve people better.

The center of AI development should be how it can serve people better.



The problem with artificial intelligence development is that it has become an intrinsic value: creating more and more intelligent machines is treated as the prime objective. In that development process, developing artificial intelligence is an end in itself. 

The prime objective should be how artificial intelligence can serve humans: how AI might make life easier and safer.

The idea that AI can fully replace humans is pure imagination. There is a lot we don't know about the brain. We may know how neurons switch connections and how brains learn new things. 

But we don't know what role certain structures play in certain actions. Things like imagination are completely beyond artificial intelligence. Even if we could model the ability for abstract thinking in theory, it would be hard to build in real life. 

Complicated AI requires powerful computers, and an AI that runs on a quantum computer could learn things unexpectedly fast. For certain problems, quantum computers can be far more powerful than binary computers. 

Self-learning algorithms run on quantum platforms can do unpredicted things, and a machine whose behavior cannot be predicted is always dangerous. 

When we think about the feelings and consciousness of a computer, we must remember that a machine with feelings is dangerous. If a robot became conscious, that would make it similar to living organisms. 

All organisms defend themselves when they are under threat. The AI might feel it is under threat, for example, when its server is being shut down. The AI itself is not dangerous, but if it controls things like weapon systems, it could try to destroy the people who are shutting it down. 

Making a real-world computer that has dreams and imagination is very hard. Things like quantum computers have shown that what is theoretically easy can turn out to be difficult in real life. 

Artificial intelligence can be better than humans in certain limited sectors; AI can play chess better than humans. But humans can do many more things than AI, and making an AI with the same kind of transversal competence as humans is difficult.

It is possible that every single one of the human brain's roughly 86 billion neurons has its own individual programming. Making an AI with the same capacity would then require roughly 86 billion database tables, and maybe 86 billion microprocessors. 

Of course, we could create artificial neurons using small bottles that contain a microchip and mercury. The mercury closes the electrical connections of those bottles. 

In that system, mercury acts as a liquid switch. To make a connection, a magnet pulls the mercury to the connection points of the wires, which makes the system route data to the right wire. This is a model of an artificial neuron. 

The microchip holds the database. That kind of system can emulate a single neuron, but emulating a human would require tens of billions of bottles. 
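
A toy software model of that "liquid switch" idea is sketched below: an input above a threshold is routed to one of several output wires, with the control value standing in for the magnet. All names and numbers are illustrative.

```python
# Toy model of a neuron as a threshold-controlled switch that routes
# its input to one of several output wires, as described above.
class SwitchNeuron:
    def __init__(self, threshold: float, outputs: list[str]):
        self.threshold = threshold
        self.outputs = outputs

    def route(self, signal: float, control: int) -> tuple[str, float] | None:
        """If the signal exceeds the threshold, send it down the wire
        selected by the control value (the 'magnet' in the analogy)."""
        if signal < self.threshold:
            return None                      # switch stays open
        wire = self.outputs[control % len(self.outputs)]
        return wire, signal

neuron = SwitchNeuron(threshold=0.5, outputs=["wire_a", "wire_b"])
print(neuron.route(0.8, control=1))  # -> ('wire_b', 0.8)
print(neuron.route(0.2, control=1))  # -> None (below threshold)
```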

Humans should be the thing that technology serves, and in the real world humans should be at the center of development. The development of artificial intelligence is different from anything else: artificial intelligence is largely an open field. Almost all programming languages and many AI tools are public, which means anybody can start their own artificial intelligence projects. 

Artificial intelligence is a powerful tool. Many people say that AI steals people's jobs. The question is: what kind of jobs will AI take? Are those jobs popular? Are the people who criticize AI willing to do those jobs themselves? The question is always about morals and ethics. What if somebody builds a robot for military purposes? 

Ethically that is wrong. But things like nuclear weapons are also inhumane, and nobody stops the development of nuclear reactors even though the plutonium those reactors create can be used for nuclear weapons. Every nuclear reactor in the world produces plutonium, yet there are no large-scale campaigns about the ethics of nuclear technology. In the same way, fusion technology can be used for weapons research, in both plasmoid and fusion explosives. 

But somehow artificial intelligence is treated differently. AI can make human lives better, yet the only thing many people see in AI is military systems killing people without mercy. Nuclear weapons are not seen as merciless killers, even though they are inhumane military technology: radiation poisoning causes extreme pain and finally a slow and certain death. Yet an inhumane weapon used by a human operator seems more acceptable than a robot that shoots enemies with a machine gun. 

Robots can be misused. They can be used as riot police and military operators. But the humans serving in those roles also serve governments, and the government decides where it wants to use those tools. 

But robots can also save humans. They can be used as tools for giving medical attention, or they can enter nuclear reactors when there is an overheating situation. Robots can explore jungles and volcanoes without risking human lives. And robots can travel to other planets; those trips take years, but for robots that time doesn't matter. 

So I believe that the first thing to walk on the surface of Mars or the icy moons of Jupiter will be a robot controlled by highly independent artificial intelligence. That means no researcher has to spend a large part of their lifetime on the trip. A flyby mission to Jupiter takes roughly 600 days. 

But if the craft must enter orbit, the journey takes around 2,000 days. That means a one-way trip takes over five years, and the return to Earth takes another five years or more, so the minimum time for such a mission is about ten years. 

Of course, some time should also be spent in orbit. If robots carried out that mission, the researchers could stay at home and do their everyday jobs; no human operator would have to spend 10 to 20 years away from home. That is one example of how AI can help researchers in extremely difficult missions. 


Image: https://www.salon.com/2021/04/30/why-artificial-intelligence-research-might-be-going-down-a-dead-end/


Noise is a problem for voice-commanded robots.

Noise is a problem for voice-commanded robots. 




Artificial intelligence, and especially voice commanding, has one big problem, and the name of that problem is noise. When people give orders to robots, the difficulty is how to filter the orders that are given on purpose from the noise around them. That selection is important when orders are given to robots operating in the middle of a crowd. 

The voice commands that a robot operating at a station must handle can include things like "would you step away" or "help", which means some person needs assistance. But how does it separate a request to carry packages from a case where somebody falls onto the railway? And of course there are many commands that robots should not follow in every case; for example, a robot might be ordered to hurt somebody. 

The robot should not follow such an order. But the problem is how the robot separates real orders from idle talk. One approach is to follow the method used in dog training: a gesture is given before the command, and that gesture prepares the robot to receive the command. 

The gesture could also be combined with some mark, like a ring or other jewelry, that confirms to the robot that the person is allowed to give such orders. A wristwatch or ring could hide a QR code telling the system that the person has certain rights. That kind of system can be used when certain people give special orders to robots. 
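
A minimal sketch of that idea follows: a command is executed only if an attention gesture was seen first and the speaker presents a valid authorization token. The token list, command whitelist, and flag names are assumptions for illustration.

```python
# Minimal sketch: accept a voice command only when an attention gesture
# preceded it and the speaker carries an authorized token (e.g. a QR code).
AUTHORIZED_TOKENS = {"QR-1234", "QR-5678"}   # illustrative token IDs
SAFE_COMMANDS = {"carry_package", "step_away", "call_help"}

def accept_command(command: str, token: str | None, attention_seen: bool) -> bool:
    if not attention_seen:            # no preparatory gesture -> treat as noise
        return False
    if token not in AUTHORIZED_TOKENS:
        return False                  # unknown or missing credential
    return command in SAFE_COMMANDS   # never execute commands off the whitelist

print(accept_command("carry_package", "QR-1234", attention_seen=True))   # True
print(accept_command("carry_package", None, attention_seen=True))        # False
```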

If we imagine an AI-controlled robot operating on a battlefield or on security missions, the robot must separate orders from noise. In those areas, robots could be disturbed with loudspeakers: using faked sounds like copied speech, enemy operators might try to turn the robot's attacks against its own masters. 

The human-shaped robot is a very interesting tool. The robot itself is a multipurpose system, and the AI or operating system determines what it can be used for. Cleaner or cargo robots might have a so-called "ghost protocol": when the robot sees things like weapons, or when somebody is in danger, it activates a protective mode. The sign given to the robot must be clear enough. 

The article linked below this text introduces how neuroscience is used to filter noise from deliberately given orders. But there are many other ways to make a robot separate noise and unauthorized commands from commands that are given on purpose by people who are authorized to give certain types of commands to robots. 


https://www.quantamagazine.org/ai-researchers-fight-noise-by-turning-to-biology-20211207/


Image 1)https://techbullion.com/machine-learning-market-profile-to-triumph-existing-processes-in-nearly-all-business-verticals-manifests-stupendous-growth/


https://thoughtandmachines.blogspot.com/

Sunday, November 7, 2021

How to teach social skills to robots and AI?



One approach is to build a large-scale database structure containing the different versions of words used in social situations. The programmer then makes the first connections between the databases, and after that the system starts to talk with its creators. 

Teaching social skills to an AI is quite easy in theory: it's just connecting databases. But social skills are more than saying words. They include noticing facial expressions, taking your hat off when going into a church, and other such things.  

If we want to make an AI that converses like a real person, that requires a lot of databases and connections between them. If the system is supposed to conduct job interviews, that requires a lot of data. And if somebody says or asks something that is not in the database, the system might ask the programmer for the answer, given by speaking or by typing it into the computer. 

If we want the AI to talk with us about everyday situations, that is really hard to build. We don't normally notice that people do not use written standard language in everyday speech, so the databases must understand dialects. If we want an AI with a very large set of skills, that requires large-scale databases. 

The problem with regular computer programs is that they do not use fuzzy logic. In conversation programs, fuzzy logic can be approximated by using multiple databases connected to a certain social dialogue. Those databases contain dialect words. 

They are connected to databases of written standard language, and those are in turn connected to a database that contains social dialogue. The programmer can then write social dialogue using dialect words, which makes the robot seem more human. 

The programmer or teacher accepts or rejects the social behavior, such as the words the AI uses. If there is no match for an answer, the programmer writes the correct answer for the machine. A sketch of that loop follows below. 
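
In this sketch, dialect words are normalized to standard language, the normalized phrase is looked up in a dialogue table, and unknown phrases are sent to a human teacher. All table contents are invented for illustration.

```python
# Sketch of a dialect-aware dialogue lookup with a human-teacher fallback.
DIALECT_TO_STANDARD = {"howdy": "hello", "gonna": "going to"}   # invented
DIALOGUE = {"hello": "Hello! How can I help you?"}              # invented

def normalize(phrase: str) -> str:
    """Map dialect words to standard language before the lookup."""
    return " ".join(DIALECT_TO_STANDARD.get(w, w) for w in phrase.lower().split())

def reply(phrase: str) -> str:
    key = normalize(phrase)
    if key in DIALOGUE:
        return DIALOGUE[key]
    # No match: ask the programmer/teacher and store the new answer.
    answer = input(f"No answer for '{key}'. Please type one: ")
    DIALOGUE[key] = answer
    return answer

print(reply("Howdy"))   # -> "Hello! How can I help you?"
```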

The idea is that when a person is talking with the AI, it records what the other party says and then answers. This kind of AI might use other parameters too, such as images of facial expressions in certain situations. The AI is the ultimate tool in cases like job interviews, and especially in video interviews. 

The system can follow things like the length of the pauses between words and how the voice changes while the person talks. It can also look for things like touching the nose during the interview, which are sometimes read as signs of lying. Such a system would also check whether the interviewed person has the kinds of skills the interviewer wants to find. 

When the interviewer asks about things like computer skills and what to do in certain situations, the job seeker may cover up missing skills with long answers. That means the interviewer might not notice that parts of the skills required of a computer operator are missing. 

For example, how to connect systems to the network. The system might record the answer and then compare it with the actions the worker should take in those cases. If there is no match, the person probably does not know what to do. 

In cases like job interviews, the AI can also check whether the same names appear repeatedly in the referee lists of different job applications. If the same referees always appear in the CVs of people whose work turns out to be poor, there is something wrong with those referees. 
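
A small sketch of that check: count how often each referee name appears across a set of applications and flag names that recur suspiciously often. The threshold and sample data are invented for illustration.

```python
# Sketch: flag referee names that appear in suspiciously many applications.
from collections import Counter

applications = {                      # invented sample data
    "applicant_1": ["J. Smith", "A. Jones"],
    "applicant_2": ["J. Smith", "B. Lee"],
    "applicant_3": ["J. Smith", "C. Wu"],
}

def suspicious_referees(apps: dict[str, list[str]], threshold: int = 3) -> list[str]:
    counts = Counter(name for refs in apps.values() for name in refs)
    return [name for name, n in counts.items() if n >= threshold]

print(suspicious_referees(applications))  # -> ['J. Smith']
```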

New self-assembly nanotubes turn the impossible possible.

 New self-assembly nanotubes turn the impossible possible.  "The crystal structure of a carbon bilayer. The purple outer layer and blue...