Re: LUXULY HI-NEWS

11
Neurons on demand: a neural network created images that directly affect the brain
See the image above? With strange images like this, MIT neuroscientists were able to activate individual neurons in the brain. Using the best available computational model of the brain's visual system, the scientists developed a way to precisely control individual neurons and neuron populations in the middle of that network. In animal tests, the team showed that information from the computational model let them create images that strongly activated specific brain neurons.
In effect, the scientists found a way to access the brain "directly" through an image, bypassing the usual route of image comprehension. But before you imagine a grim future in which we really are zombified by TV screens, let's take it step by step.
This is definitely a breakthrough.
The main conclusion of the work is that existing computational models of the visual neural system are close enough to the real one to be used to monitor brain states in animals. How accurately these models mimic the workings of the visual cortex is a hotly debated question, says James DiCarlo, head of the Department of Brain and Cognitive Sciences at MIT and senior author of the study, which appeared on May 2 in the journal Science.
“People have long wondered whether these models provide an understanding of the visual system,” he says. “Instead of debating it in academic circles, we have shown that these models are already powerful enough to be used in new and important ways. Whether or not you understand how the model works, in a sense it is already useful.”
In other words, it does not matter exactly how the computational model of the brain's visual system works; what matters is that it is accurate enough to be used already, and that new experiments can be built on it. This is the first takeaway of the work.
Controlling neurons through images is possible
Over the past few years, DiCarlo and others have developed models of the visual system based on artificial neural networks. Each network starts with an arbitrary architecture of model neurons, or nodes, connected to each other with varying strengths, or "weights."
The scientists then train these models on a library of over a million images. As the model views each image together with a label for its most important object (an airplane or a chair, for example), it learns to recognize objects by adjusting the strengths of its connections. It is hard to pin down exactly how the model achieves this recognition, but DiCarlo and his colleagues have previously shown that the "neurons" in these models produce activity patterns very similar to those observed in the visual cortex of animals responding to the same images. In other words, the neural network seems to be learning to see for real.
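The training loop described above can be sketched in miniature. This is a hedged illustration only: a tiny logistic-regression "network" on synthetic 16-pixel images, with made-up data and hyperparameters, standing in for the deep networks and the million-image library used in the actual work.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_toy_images(n=200):
    """Toy 'images': class 0 is bright on the left half, class 1 on the right."""
    X = rng.normal(0.0, 0.1, size=(n, 16))
    y = rng.integers(0, 2, size=n)
    X[y == 0, :8] += 1.0
    X[y == 1, 8:] += 1.0
    return X, y

def train(X, y, lr=0.5, epochs=200):
    """Learn to label images by repeatedly adjusting connection 'weights'."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probability of class 1
        w -= lr * X.T @ (p - y) / len(y)        # strengthen/weaken connections
        b -= lr * np.mean(p - y)
    return w, b

X, y = make_toy_images()
w, b = train(X, y)
accuracy = float(np.mean(((X @ w + b) > 0).astype(int) == y))
```

The deep networks in the study work on the same principle, just with many layers and millions of weights instead of sixteen.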
In the new study, the scientists wanted to test whether their models could do something that had never been demonstrated before. In particular, they wondered whether the models could be used to control neural activity in the visual cortex of animals.
“Until now, we have used these models to predict how neurons would respond to stimuli they had not seen before,” the scientist says. “The main difference here is that we go a step further and use the models to drive neurons into desired states.”
To achieve this, the scientists first built an accurate one-to-one map from nodes in the computational model to neurons in the brain's visual area V4. They did this by showing the same images to the animals and to the model and comparing their responses. V4 contains millions of neurons, but for this study, maps were made of subpopulations of 5 to 40 neurons at a time.
“Once each neuron has its assignment, the model lets you make predictions about that neuron,” says DiCarlo. The scientists then set out to test whether these predictions could be used to control the activity of individual neurons in the visual cortex. The first kind of control, which they called “stretching,” involves showing an image that pushes a given neuron's activity far beyond the range usually evoked by “natural” images like those used to train the neural networks.
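The "stretching" idea can be sketched as gradient ascent on the input image. The "model neuron" below is a toy linear ReLU unit, not the study's deep network, and the norm bound is a crude stand-in for keeping the image within a valid range; everything here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=64)   # input weights of the toy 'model neuron'

def activation(x):
    """ReLU model neuron standing in for one unit of the deep network."""
    return max(0.0, float(w @ x))

def synthesize(steps=100, lr=0.1, max_norm=1.0):
    """Gradient ascent on the image; the norm bound crudely keeps it 'valid'."""
    x = rng.normal(0.0, 0.01, size=64)
    if w @ x < 0:
        x = -x                 # flip into the neuron's responsive half-space
    for _ in range(steps):
        x += lr * w            # ReLU gradient w.r.t. x is w while active
        n = np.linalg.norm(x)
        if n > max_norm:
            x *= max_norm / n  # project back onto the allowed image set
    return x

naturals = [rng.normal(0.0, 0.1, size=64) for _ in range(50)]  # stand-in "natural" images
best_natural = max(activation(im) for im in naturals)
x_synthetic = synthesize()
```

The synthesized input drives the toy neuron well beyond anything the random "natural" images produce, which is the essence of stretching.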
The researchers found that when the animals were shown such "synthetic" images, created by the models and resembling no natural object, the target neurons responded as expected. On average, the neurons showed about 40 percent more activity in response to these images than to natural ones. No one had ever achieved this kind of control before.
“The fact that they managed to do this is amazing. It is as if, from the neuron's point of view, its ideal image has suddenly come into focus. The neuron is suddenly handed the stimulus it has always been looking for,” says Aaron Batista, an associate professor of bioengineering at the University of Pittsburgh, who was not involved in the study. “This is a great idea, and pulling it off is a real feat. It is perhaps the strongest argument yet for using artificial neural networks to understand real ones.”
Just think: scientists have built a (so far simple) image generator that causes a specific effect in the brain of an (so far) animal. In theory, and so far only in theory, one could create a "perfect" image to control hormone release, create specific memories, or program human actions, because all of these are the work of neurons. An image no one has ever seen, produced by a neural network that understands the brain's inner workings, could in principle both cure and kill.
In a similar series of experiments, the scientists tried to create images that would drive one neuron to maximum activity while keeping the activity of neighboring neurons very low, a harder task. For most of the neurons tested, they managed to boost the target neuron's activity with only a slight increase in the surrounding ones.
“The general trend in neuroscience is that experimental data collection and computational modeling are done somewhat separately, so the models are never properly validated and there is no measurable progress,” the scientists say. “Our effort brings back this 'closed loop' approach, which is important for building and testing the models that are most like the brain.”
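The harder "one hot" variant adds a penalty on neighboring activity to the objective. Again a hedged toy: five linear model neurons with unit-norm random weights, with the penalty strength and step size chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(5, 64))
W /= np.linalg.norm(W, axis=1, keepdims=True)  # 5 unit-norm model neurons; row 0 is the target

def population(x):
    """Linear activities of the five model neurons."""
    return W @ x

def synthesize_one_hot(target=0, penalty=5.0, steps=200, lr=0.05, max_norm=1.0):
    """Ascend (target activity - penalty * squared neighbor activity)."""
    x = np.zeros(64)
    for _ in range(steps):
        grad = W[target].copy()
        for j in range(W.shape[0]):
            if j != target:
                grad -= penalty * 2.0 * float(W[j] @ x) * W[j]  # suppress neighbor j
        x += lr * grad
        n = np.linalg.norm(x)
        if n > max_norm:
            x *= max_norm / n
    return x

acts = population(synthesize_one_hot())
```

The resulting image drives the target strongly while the penalty keeps its neighbors nearly silent, mirroring the "slight increase in the surrounding neurons" reported in the study.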
Measurement accuracy
The scientists also showed that they could use the model to predict how V4 neurons would respond to synthesized images like the one above. Most previous tests of such models used the same kind of naturalistic images the models were trained on. The MIT team found that the models predicted the brain's responses to synthesized images with 54 percent accuracy, compared with 90 percent accuracy for natural images.
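One plausible way to quantify such predictivity, sketched here with synthetic data, is to correlate predicted and measured responses across a set of images. The study's exact accuracy metric may differ, and the noise levels below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def predictivity(predicted, measured):
    """Pearson correlation between model predictions and recorded responses."""
    p = predicted - predicted.mean()
    m = measured - measured.mean()
    return float(p @ m / (np.linalg.norm(p) * np.linalg.norm(m)))

measured = rng.normal(size=100)                        # stand-in recorded responses
pred_natural = measured + rng.normal(0.0, 0.3, 100)    # model fits natural images well
pred_synthetic = measured + rng.normal(0.0, 1.2, 100)  # and synthesized images worse

r_natural = predictivity(pred_natural, measured)
r_synthetic = predictivity(pred_synthetic, measured)
```

The gap between the two correlations is the toy analogue of the 90-percent-versus-54-percent gap reported above: accuracy drops once the model is asked about inputs far from its training domain.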
“In a sense, we are quantifying how accurate these models are when forecasting outside the domain they were trained in,” says one of the researchers. “Ideally, a model should be able to predict responses accurately no matter what the input is.”
The scientists now hope to improve the models' accuracy by letting them learn from the new information gained from viewing the synthesized images, something this study did not do. Simply put, the models will learn from the images they themselves generate.
This kind of control will be useful for neuroscientists studying how different neurons communicate and interact. In the future, the approach could potentially help ease mood disorders such as depression. The scientists are now working on extending their model to the inferotemporal cortex, which feeds into the amygdala, a region involved in processing emotions.
“If we had a good model of the neurons that trigger a surge of emotion or cause different kinds of disorders, we could use it to drive those neurons in a way that helps alleviate the disorders,” the researchers say.


Re: LUXULY HI-NEWS

12
Lockheed Martin and Rolls-Royce will create the first 100-kilowatt combat laser
The US Army has completed a tender for the creation of its first combat laser with a power of 100 kilowatts. The winners were the American companies Dynetics and Lockheed Martin, which will bring in outside partners, in particular the engineering giant Rolls-Royce. The competition had been running since mid-2016.
The plan is for the defense contractors to build a tactical vehicle with a high-energy laser based on a truck from the Army's FMTV family. The laser will be mounted on the roof, allowing such vehicles to shoot down missiles, drones, helicopters and even small aircraft. Today these trucks are used mainly to transport ammunition.
To integrate the laser, the vehicles will be substantially re-equipped: the plan is to install a hybrid power plant that Rolls-Royce recently unveiled. It produces 300 kilowatts and can power lasers of up to 100 kilowatts. Beams from several laser emitters will be fed over optical fiber into a special combiner, which outputs a single high-power laser beam.
Lockheed Martin plans to carry out the first tests in two to three years, and the US Army wants the new trucks in service within five. Given that spectral beam combining is the underlying approach, such a timeline looks quite realistic.
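The combining scheme amounts to simple power bookkeeping: the output beam is roughly the sum of the emitter powers times a combining efficiency. The emitter count and efficiency below are assumptions for illustration, not Lockheed Martin figures; only the 300 kW supply and the 100 kW target come from the article.

```python
def combined_output_kw(per_emitter_kw, n_emitters, efficiency):
    """Optical power of the single output beam after fiber combining."""
    return per_emitter_kw * n_emitters * efficiency

SUPPLY_KW = 300.0   # hybrid power plant capacity (from the article)
TARGET_KW = 100.0   # required beam power (from the article)

# Hypothetical emitter layout: ten 12.5 kW modules, 85% combining efficiency.
beam_kw = combined_output_kw(per_emitter_kw=12.5, n_emitters=10, efficiency=0.85)
feasible = TARGET_KW <= beam_kw <= SUPPLY_KW
```

Under these assumed numbers the combined beam just clears the 100 kW target while staying well inside what the 300 kW power plant can feed.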
Moreover, the company has relevant experience: in 2018, for example, it patented a compact fusion reactor. Such a reactor could be powerful enough to drive an aircraft carrier or an aircraft the size of the C-5 Galaxy, supply electricity to a city of 50,000 to 100,000 people, and perhaps even send us on a trip to Mars.
In parallel, China is testing its own combat laser system, which it plans to use for air defense. Although less powerful, it can hit aerial targets up to 5 kilometers away.

Re: LUXULY HI-NEWS

13
Scientists have proven that augmented reality changes people's behavior
A new study by researchers from Stanford's School of Humanities and Sciences shows that an augmented reality (AR) experience significantly changes a person's behavior in the real world even after they stop using the AR headset. A group of scientists led by Professor Jeremy Bailenson established this in three experiments with 218 volunteers. The results were published in the journal PLOS ONE, and a press release is available on Stanford University's website.
In the first experiment, participants were shown a realistic 3D model of a person named Chris sitting on a chair in the room (the picture was created with augmented reality technology, that is, by overlaying a virtual image layer on the real physical world). Volunteers were asked to solve anagram tasks while Chris watched them. The scientists found that, just as with a real person present, people literally felt Chris's presence, which in turn affected how quickly they solved the anagrams. Participants reported that having an augmented-reality person watching them made the task harder.
In the second experiment, the scientists checked whether participants would sit on the chair where Chris had been sitting. It turned out that even though the virtual person was no longer in the chair, none of the participants wearing the AR headset at that moment chose to sit on it. Even after removing the headset, 72 percent of them did not dare take Chris's chair and sat down beside it.
“The fact that not one of the participants wearing the AR headset sat in the chair where the virtual avatar had been was a surprise to us,” comments Bailenson.
“These results show how deeply AR content can integrate into your physical space, changing your behavior and your relationship with it. Interestingly, the sense of the AR content's presence persisted even after the participants removed their headsets.”
In the last experiment, the scientists paired people wearing an AR headset with people without one. After the pairs talked, the participants with headsets reported feeling less connected to their conversation partner.
“We found that using augmented reality technology can change how you behave: how you walk, how you turn your head, how you perform tasks, and how you interact socially with other real people in the room,” Bailenson concludes.

Re: LUXULY HI-NEWS

14
No more punctures: Michelin and GM promise to release airless tires by 2024
Driving a car always involves risks. Fortunately, most trips go off without a hitch, but sometimes the road gets the better of you, and one real headache is a punctured tire. Thanks to Michelin and General Motors (GM), which have announced a joint effort to refine airless tire technology, that problem may soon become a relic of the past.
After years of research and development, the French company Michelin, one of the leaders of the tire industry, has introduced a new generation of airless tire called UPTIS (Unique Puncture-proof Tire System). Airless (non-pneumatic) tires have no sealed chamber into which air is pumped.
The UPTIS design consists of an outer tread and an inner rim, with soft "spokes" made of a composite of rubber and fiberglass-reinforced polymer stretched between them. These allow the tires to be run at fairly high speeds.
According to Digital Trends, UPTIS is an advanced version of the Tweel airless tire, which the company introduced almost 15 years ago and now uses on skid-steer loaders. Early versions of such tires had one major drawback, being louder than regular tires, but that problem was solved over time.
This year, Michelin will begin testing UPTIS on Chevrolet Bolt EV electric cars at GM's proving ground in Milford, Michigan. According to the company, such tires will reduce unsprung weight and increase the range per charge.
If the tests succeed, GM will begin offering airless tires as an option on some models in its lineup by 2024.
What are the benefits of airless tires?
Airless tires offer several advantages over traditional ones. Above all, they are impervious to punctures, cuts and similar damage, since there is no sealed air chamber. For the same reason, they never lose performance from under- or over-inflation. All of this should cut the cost of producing and disposing of tires, as well as the volume of manufacturing waste. Michelin estimates the savings at 2 million tires per year, since fewer tires will have to be scrapped before the end of their service life.
Airless tires may even eliminate the need for a spare wheel, making cars lighter overall. They could also benefit businesses running large vehicle fleets: no time spent replacing punctured tires means less maintenance downtime, which is good for the business.

Re: LUXULY HI-NEWS

15
The neural network was taught to “animate” portraits based on just one static image
Russian specialists from the Samsung AI Center in Moscow, together with engineers from the Skolkovo Institute of Science and Technology, have developed a system that creates realistic animated images of people's faces from just a few static frames. Usually this requires large databases of images, but in the examples the developers presented, the system was trained to animate a person's face from as few as eight static frames, and in some cases one was enough.
As a rule, producing a photorealistic, personalized model of a human face is difficult because of the high photometric, geometric and kinematic complexity of the human head. This is due not only to the complexity of modeling the face as a whole (for which many modeling approaches exist) but also to the complexity of modeling particular features: the mouth cavity, hair, and so on. A second complicating factor is our predisposition to spot even minor flaws in modeled human heads. This low tolerance for modeling errors explains the current prevalence of non-photorealistic avatars in online communication.
According to the authors, the system, based on few-shot learning, can create highly realistic models of people's talking heads, and even of portrait paintings. The algorithms synthesize an image of a person's head using facial landmark lines taken from another video fragment, or the landmarks of another person's face. As training material, the developers used an extensive database of celebrity videos. To get the most accurate "talking head," the system needs more than 32 images.
To make the animated face images more realistic, the developers drew on earlier work in generative adversarial networks (GANs, in which a neural network generates the details of the image itself, in effect becoming an artist), as well as on meta-learning, in which each element of the system is trained and designed to solve a specific task.
Three neural networks are used to turn static images of people's heads into animation: the Embedder (embedding network), the Generator (generation network) and the Discriminator (discriminator network). The first maps head images (with approximate facial landmarks) into embedding vectors that contain pose-independent information. The second network uses the facial landmarks and generates new data from them through a set of convolutional layers, providing robustness to changes of scale, rotation, viewing angle and other distortions of the original face image. The discriminator network assesses the quality and realism of what the other two networks produce. As a result, the system turns a person's facial landmarks into realistic-looking personalized photographs.
The developers stress that their system can initialize the parameters of both the generator and the discriminator individually for each person in a picture, so training can proceed from just a few images, which speeds it up despite the tens of millions of parameters to be tuned.
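The three-network data flow can be traced at the level of array shapes. In this hedged sketch each network is a single random linear map rather than a deep convolutional network, and all dimensions are illustrative, not those of the actual system.

```python
import numpy as np

rng = np.random.default_rng(4)

IMG, LMK, EMB = 256, 68 * 2, 128  # flattened image, facial-landmark, embedding sizes

W_embed = rng.normal(0, 0.1, (EMB, IMG + LMK))  # Embedder: frame + landmarks -> vector
W_gen   = rng.normal(0, 0.1, (IMG, EMB + LMK))  # Generator: embedding + new pose -> image
W_disc  = rng.normal(0, 0.1, (1, IMG + LMK))    # Discriminator: image + pose -> realism score

def embed(frames, landmarks):
    """Average pose-independent embeddings over the few available frames."""
    vecs = [W_embed @ np.concatenate([f, l]) for f, l in zip(frames, landmarks)]
    return np.mean(vecs, axis=0)

def generate(embedding, new_landmarks):
    return W_gen @ np.concatenate([embedding, new_landmarks])

def discriminate(image, landmarks):
    return float(W_disc @ np.concatenate([image, landmarks]))

# Few-shot setting: eight source frames, one target pose.
frames = [rng.normal(size=IMG) for _ in range(8)]
lmks = [rng.normal(size=LMK) for _ in range(8)]
e = embed(frames, lmks)
target_pose = rng.normal(size=LMK)
fake = generate(e, target_pose)
score = discriminate(fake, target_pose)
```

During real training the discriminator's score would drive updates to the other two networks; here it simply closes the loop so the frames-to-score pipeline can be seen end to end.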


Re: LUXULY HI-NEWS

16
The planes of the future will have virtual reality, yoga studios and cauliflower
In 2018, Qantas introduced the first non-stop service between Europe and Australia. A 17-hour flight like that is only possible because aircraft makers such as Boeing and Airbus have spent decades on innovations in everything from wing shapes to stronger, lighter materials. But keeping a plane in the air all day is pointless if you cannot keep people entertained in cramped, boring cabins. So the plans include wider seats, lighting that reprograms circadian rhythms, and cabin air that is more pleasant to breathe.
Faster Wi-Fi is also planned. And they had better hurry: Qantas intends to launch 22-hour flights as early as 2022.
Why are airplane seats so narrow?
The most precious centimeters on a plane are the ones we sit on. The economics of air travel force most passengers to squeeze into seats no more than 42 centimeters wide. But for long trips, wider seats may have to come back. Studies by the London Sleep Centre show that widening seats by even a few centimeters improves passengers' sleep quality by 53%. Airlines are also looking for ways to provide more legroom; one option is to upholster seat backs with thinner material.
Why is airplane food so tasteless?
Qantas worked with nutritionists to develop an anti-jet-lag menu for its 17-hour runs from Perth to London. The food emphasizes fresh ingredients with a high water content, such as cucumber, strawberries, celery and leafy greens, so that passengers do not get too dehydrated. That helps reduce fatigue and headaches.
Singapore Airlines is likewise rewriting the menu for its 18-hour flights to New York. Low-salt dishes keep passengers hydrated and contain a minimum of carbohydrates, preventing blood sugar spikes. The company is currently considering replacing potatoes with cauliflower.
Why do airplanes feel short of air?
Ever wondered why you feel tired after a long flight? It may be because most aircraft cabins recreate the atmosphere at an altitude of over 2,000 meters, where the thinner air makes the heart and lungs work harder to supply the body with oxygen. Raising the air pressure in most airplanes would stress weak points such as windows and doors, but Boeing has reinforced the fuselage of its 777X so the cabin can be held at a lower effective altitude, closer to 1,800 meters. That also means higher humidity, which should ease the dry eyes and itchy noses of transcontinental flights.
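The effect of cabin altitude on pressure can be checked with the standard isothermal barometric formula p = p0 * exp(-h/H). The formula and constants are textbook values, not from the article.

```python
import math

P0 = 101.325   # sea-level pressure, kPa
H = 8400.0     # approximate atmospheric scale height, m

def pressure_kpa(altitude_m):
    """Ambient pressure at a given altitude under the isothermal approximation."""
    return P0 * math.exp(-altitude_m / H)

# Pressure at representative cabin altitudes (meters).
cabin = {h: pressure_kpa(h) for h in (1800, 2000, 2400, 3000)}
```

The lower the cabin altitude an airframe can hold, the higher the pressure inside and the easier breathing becomes, which is exactly why a stronger fuselage matters.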
Take off and switch off
Perhaps the easiest way to improve a long flight is to help passengers forget they are flying at all. Unfortunately, in-flight Wi-Fi tops out at around 15 Mbit/s. Airbus wants to improve matters by connecting to better communications satellites and ground stations. But even if onboard Internet never supports streaming, future passengers will enjoy excellent entertainment options: Airbus is experimenting with OLED displays and virtual reality headsets.
Awakening in the middle of the night
Our bodies set their circadian clocks by the cycles of light and darkness. Many airlines already use full-spectrum LED lighting on long-haul flights to manage passengers' sleep: bright, cool light imitates daylight, while warm pink tones help the body fall asleep. Airbus has developed 17 million light and color combinations that go beyond imitating day and night. These light-therapy techniques mimic various external conditions and can help synchronize passengers' internal clocks with the destination time zone.
Sleep at work
Long routes require several crews working in shifts. Off-duty pilots sleep right beneath the cockpit or above business class; flight attendants usually sleep in bunks hidden above the last rows of economy. But space is tight, so no one really gets enough sleep. Both Boeing and Airbus are upgrading: wider, longer beds with more headroom, screened by thick, noise-absorbing curtains.
Comfort zones
Most airliners hide extra cargo holds in their bellies, but few airlines actually fill all that space. The Airbus A330 has as much as 55 square meters of it, the size of a large apartment. That space could be converted into a hospital bay, a business center, a game room, a chic lounge, a yoga studio or a gym, all for travelers.

Re: LUXULY HI-NEWS

17
Goodbye, Big Bang. Hello, black hole?
Could the famous Big Bang theory need revision? A group of theoretical physicists suggests that the birth of our universe could have resulted from a four-dimensional star collapsing into a black hole and ejecting its debris.
Before plunging in, let's agree: nobody knows for sure. Nobody was there when it all began. The standard theory says the universe emerged from an infinitely dense point, a singularity, but who knows what came before the Big Bang?
“For all physicists know, dragons could have come flying out of the singularity,” says Niayesh Afshordi, an astrophysicist at the Perimeter Institute for Theoretical Physics in Canada and co-author of the new study.
What are the problems with the Big Bang theory? First, the singularity itself. Second, it is hard to explain why the resulting universe has an almost uniform temperature throughout, because the age of the universe (about 13.8 billion years) does not appear to be enough time, as far as we can judge, for it to reach temperature equilibrium.
Most cosmologists say the universe must have expanded faster than the speed of light for this to be possible, but Afshordi says that theory has problems too: “The Big Bang was so chaotic that it is not clear there would have been even a small homogeneous patch for inflation to start working on.”
Here is what the physicists propose instead.
Their model represents our three-dimensional universe as a membrane (or brane) floating in a four-dimensional superverse. Yes, the concept can make your head spin, so it may help to think of the brane as a two-dimensional space and the superverse as a three-dimensional one.
So if the superverse contains four-dimensional stars, those stars could go through the same life cycles as the three-dimensional ones we know. The most massive would explode as supernovae, shedding their outer layers and collapsing into black holes.
A 4D black hole would have an "event horizon," just like the 3D ones we know: the boundary between the inside and the outside of the black hole. There are many theories about what happens inside a black hole, though none of it has ever been observed.
In a 3D universe, an event horizon appears as a two-dimensional surface. So in a 4D universe, the event horizon would be a three-dimensional object called a hypersphere.
In short, the model says that when 4D stars explode, the leftover material can create 3D branes surrounding 3D event horizons, which then expand. And that, the authors propose, is how it all began.
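The dimension bookkeeping can be made concrete with standard geometry: in our 3D universe an event horizon is a 2-sphere of area 4*pi*r^2, while in a 4D universe it would be a 3-sphere (hypersphere) whose "surface volume" is 2*pi^2*r^3. These are textbook formulas, not from the paper.

```python
import math

def horizon_area_3d(r):
    """Area of the 2-sphere event horizon seen in a 3D universe."""
    return 4.0 * math.pi * r ** 2

def horizon_volume_4d(r):
    """'Surface volume' of the 3-sphere (hypersphere) horizon in a 4D universe."""
    return 2.0 * math.pi ** 2 * r ** 3

a = horizon_area_3d(1.0)    # 4*pi for unit radius
v = horizon_volume_4d(1.0)  # 2*pi^2 for unit radius
```

Each step up in dimension turns the horizon from a surface you could paint into a volume you could fill, which is why a 4D star's horizon can itself serve as a 3D space.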

Re: LUXULY HI-NEWS

18
The black hole at the center of our galaxy may be a wormhole
Of all the astounding cosmic phenomena, black holes and wormholes attract the most attention, perhaps because they are so far removed from the natural world as we know it. But for all the theories about these objects, they remain full of mysteries.
At the center of the Milky Way there is a very compact source emitting an enormous amount of radio waves. Scientists attribute this strange phenomenon to a supermassive black hole in the central region of our galaxy. After many years of painstaking observation of its flares, astronomers have concluded that the object, Sagittarius A*, is a black hole with a mass of more than 4 million suns. Moreover, almost all galaxies, if not all, have black holes at their centers. And we still understand little about how they grow and develop at such a disproportionately high rate.
Take Sagittarius A*, for example. It concentrates a mass of more than 4 million suns, yet it is a small pea next to the giant ultramassive black holes whose masses can reach a billion suns. Some are so large that it is hard even to set an upper limit on black hole size. It would be natural to assume that growing such black holes takes time. But no: black holes seem to grow literally "overnight" in cosmic terms. Scientists have only been able to reconcile the sizes of individual supermassive black holes with models of galactic evolution by introducing an unknown variable that governs their growth.
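The quoted masses can be turned into sizes with the standard Schwarzschild radius formula r_s = 2GM/c^2; the constants are textbook values, not from the article.

```python
# Schwarzschild radius r_s = 2 G M / c^2 for a non-rotating mass.
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8       # speed of light, m/s
M_SUN = 1.989e30  # solar mass, kg
AU = 1.496e11     # astronomical unit, m

def schwarzschild_radius_m(mass_kg):
    """Radius of the event horizon for a non-rotating mass."""
    return 2.0 * G * mass_kg / C ** 2

r_sgr_a = schwarzschild_radius_m(4.0e6 * M_SUN)    # Sagittarius A* at ~4 million suns
r_billion = schwarzschild_radius_m(1.0e9 * M_SUN)  # an ultramassive billion-sun object
```

Even at 4 million solar masses the horizon radius is only about 0.08 AU, a "small pea" indeed; the billion-sun giants scale that up 250-fold.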
Not long ago, two researchers, Zhilong Li and Cosimo Bambi of Fudan University in Shanghai, published a paper suggesting that Sagittarius A* is not a black hole but a wormhole. Why?
Wormholes themselves are shrouded in myths. Some think they serve as portals through time; others note that wormholes could in theory shorten the path between two distant points in space. Li and Bambi work with an ordinary wormhole that structurally resembles a traditional black hole.
To quote the scientists:
“Despite their exotic nature, at least several types of primordial wormholes are viable candidates for explaining the supermassive objects in the centers of galaxies. These objects have no solid surface, so they can mimic the presence of an event horizon. They were born in the young universe and grew during inflation, which can explain their presence even at very large redshift.”
"Very large redshift" refers to galaxies whose light is strongly redshifted by the Doppler effect. Such galaxies are very old and distant, probably formed in the first epochs after the Big Bang. The expansion of the universe has carried them far away from us, shifting their light toward the red end of the electromagnetic spectrum. The same effect, at the opposite end of the spectrum, is how we know the Andromeda galaxy is approaching the Milky Way. The light of distant galaxies travels billions of years before reaching Earth, and since the speed of light is finite, we see these old galaxies at the very beginning of their lives. We also know that the first galaxies formed after the Big Bang harbor black holes in their central regions.
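The redshift arithmetic behind the quote is simple: z is the fractional wavelength shift, and for small speeds z is approximately v/c, negative for an approaching source such as Andromeda. The wavelengths and velocity below are illustrative round numbers, not measurements from the paper.

```python
C_KM_S = 2.998e5  # speed of light, km/s

def redshift(observed_nm, emitted_nm):
    """z = (lambda_observed - lambda_emitted) / lambda_emitted."""
    return (observed_nm - emitted_nm) / emitted_nm

def doppler_z(velocity_km_s):
    """Low-speed Doppler approximation: z ~ v/c (negative means approaching)."""
    return velocity_km_s / C_KM_S

z_distant = redshift(662.0, 656.3)  # an H-alpha line shifted toward the red
z_andromeda = doppler_z(-110.0)     # an approaching galaxy: blueshift
```

Galaxies from the first epochs have z far larger than this toy value, which is what "very large redshift" means in the quote.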
And here the wormhole alternative comes into play. A wormhole that mimics the properties of a black hole could have formed in only one way: during the Big Bang itself, with each one born at that same moment with a mass of millions of suns. That would explain why the very first galaxies carry supermassive objects in their nuclei. They are simply not black holes, but wormholes whose properties closely resemble them.
How to check this?
Scientists believe the answer lies in the "fingerprint" of the object, which we will be able to see thanks to a new instrument being integrated into one of the most powerful telescopes in the world. They have devised an ingenious way to test their hypothesis: examine the distinctive glow of any such wormhole, above all Sagittarius A*.
More specifically, black holes and wormholes emit spectrally different "blobs" of plasma, which can readily be distinguished by GRAVITY, the aforementioned instrument soon to be installed on the Very Large Telescope. Not only will the signatures differ; the emission from the two kinds of object will differ in extent. The hypothetical wormholes would show a "very narrow emission line," while a black hole's spectrum would be "broad and distorted by the effects of special and general relativity."
It goes without saying that GRAVITY still has to come online. But perhaps we will then find the first proof that wormholes exist.

Re: LUXULY HI-NEWS

19
The new interface will connect the brain with the cloud already in this century
Imagine the technology of the future, which will provide instant access to world knowledge and artificial intelligence, literally at will, if you think about something specific. Communication, education, work and the world, as we know it - everything will change. An international group of scientists led by the University of California at Berkeley and the United States Institute for Molecular Production published a paper in Frontiers in Neuroscience, in which she spoke about the appearance of amazing things at the junction of nanotechnology, nanomedicine, AI and computing.
According to the scientists, a "brain/cloud interface" (B/CI) connecting brain cells to vast cloud-computing networks in real time will be developed within this century.
Brain nanorobots
The concept of the B/CI was originally proposed by the inventor, writer and futurologist Ray Kurzweil: he suggested that neural nanorobots (the brainchild of Robert Freitas, Jr., the senior author of the study) could be used to connect the neocortex of the human brain to a "synthetic neocortex" in the cloud. Our wrinkled neocortex, literally the "new cortex," is the newest, smartest, most "conscious" part of the brain.
Freitas suggested that neural nanorobots would provide direct, real-time monitoring and control of the signals flowing to and from brain cells.
“These devices could move through a person’s vascular network, cross the blood-brain barrier and position themselves precisely between or even inside brain cells,” explains Freitas. “They would then wirelessly transmit the encoded information to a cloud-based supercomputer network for real-time monitoring and extraction of brain data.”
Internet thoughts
This cortex in the cloud would allow information to be loaded into the brain, Matrix-style, the scientists say.
“The human B / CI system, mediated by neural nanorobotics, will be able to provide people with instant access to all the accumulated human knowledge available in the cloud, while significantly improving human learning and intelligence,” says lead author Nuno Martins.
Such technology will allow us in the future to create a “global superbrain” that could connect networks of individual human brains and AI, providing collective thinking.
“A simple experimental version of the BrainNet system has already been tested and allowed information to be exchanged between individual brains through the cloud,” explains Martins. “Electrical signals and magnetic stimulation were used, allowing the ‘recipient’ and ‘sender’ to perform joint tasks.”
“As neural nanorobotics evolves, we foresee the future creation of ‘superbrains’ that can harness the thoughts and thinking power of any number of people and machines in real time. This shared ‘consciousness’ could change the approach to democracy, strengthen empathy, and ultimately unite culturally diverse groups into a truly global society.”
When can we connect?
The group estimates that even existing supercomputers have the processing speed required to handle the volumes of neural data involved in a B/CI - and they are only getting faster.
The bottleneck in B/CI development is more likely to be the transfer of neural data to those supercomputers.
“This task involves not only finding bandwidth for global data transfer,” Martins warns, “but also how to enable data exchange with neurons through tiny devices embedded deep in the brain.”
One of the solutions proposed by the authors is the use of “magnetoelectric nanoparticles” to effectively enhance the connection between neurons and the cloud.
“These nanoparticles have already been used in living mice to couple external magnetic fields to neural electric fields - that is, to detect and locally amplify these magnetic signals and thereby alter the electrical activity of neurons,” explains Martins. “This also works in the opposite direction: the electrical signals produced by neurons and nanorobots can be amplified by the magnetoelectric nanoparticles so that they can be detected outside the skull.”
Safely delivering these nanoparticles - and nanorobots - into the brain through the bloodstream will be the biggest challenge for everyone in the field of brain/cloud interfaces. Careful analysis of the nanoparticles' biodistribution and biocompatibility will be required before they can be considered for use in humans. Even so, Martins is confident that the "Internet of thoughts" will become a reality by the end of the century.
Old age in the head: until what age does the brain produce new neurons?
A group of scientists from several institutes in Spain has found evidence of neurogenesis (the emergence of new neurons) in the human brain well into extreme old age. In their article, published in the journal Nature Medicine, the group describes its studies of the brains of recently deceased people and its findings. For the past few years, scientists have been arguing about the age at which the brain stops producing new neurons, as well as about which parts of the brain this happens in.
Many studies in this area have focused on the hippocampus, because this part of the brain is most involved in storing memories - logic dictates that new memories require new neurons, since they have to be stored somewhere. In addition, the hippocampus is one of the brain structures damaged by memory-robbing diseases like Alzheimer's. Last year, an international team of scientists concluded that neurogenesis in the hippocampus ceases after childhood.
In their new work, scientists report that they have proved the opposite: in fact, neurogenesis continues until extreme old age.
Scientists: Neurogenesis lasts up to 87 years
Previous studies have shown that in the early stages of their development, neurons contain the protein doublecortin, which can be seen under a microscope. The Spanish study relied heavily on this fact: the scientists examined the brains of recently deceased people (within 10 hours of death) under a microscope for signs of doublecortin.
The scientists report finding numerous examples of doublecortin-containing cells, showing that new neurons were growing in the brains of people who died between the ages of 43 and 87. Remarkably, when the same tests were run on people who had had Alzheimer's disease, very few signs of neurogenesis were found. This suggests that the disease not only robs people of old memories but also prevents them from forming new ones.
The researchers also note that they used a more rigorous approach to preserving the brain tissue than previous studies did, which could explain the difference in results. There is still no definitive answer to the question of up to what age the brain can form new neurons, but this is a first step toward one.