After virtual reality, augmented reality and mixed reality comes a new term, parallel reality, which promises much more personalized experiences in public places. Here is how it works and why some find it worrisome.
Imagine two people looking at the same screen. The monitor is the same, but each of them sees a different message. They are not wearing glasses or a virtual reality headset, and they are not using an app. How is that possible?
No, it is not a futuristic fantasy.
The answer lies in a technological novelty that promises to make certain experiences much more personalized: parallel reality.
The potential uses of parallel reality, from mass events to outdoor advertising, are many.
The first people to experience it will be travelers passing through Detroit Metropolitan Airport. The American airline Delta Airlines will test it in mid-2020, as representatives of the company explained at CES 2020, the largest technology fair in the world, held in Las Vegas.
The system will be able to show nearly 100 customers at once unique information about their flights, once they have scanned their boarding passes. It will be available in English, Spanish, Japanese, Korean and other languages.
The company has been working with the startup Misapplied Sciences, which specializes in this type of technology.
What is parallel reality?
The phrase can evoke the science-fiction idea of multiple realities or versions of the world existing concurrently, each differing from our own in small or significant ways. In technology, however, parallel reality refers to displays that present different content to different viewers of the same screen at the same time, with potential uses in advertising, public information, entertainment and other personalized experiences.
“Parallel reality displays are a new technology with which many people, standing shoulder to shoulder and looking at the same screen at the same time, can each see something different, without the need to wear glasses,” the company says on its LinkedIn profile.
In this way, “public places such as airports, stadiums, shopping centers and tourist attractions can be customized for each person simultaneously,” the company adds.
In fact, it can serve thousands of people at a time, each reading messages in a different language or receiving different information.
The company, based in Redmond, Washington, calls it an “incredible innovation” and says it can be applied not only to screens, but also to signs and lights.
“It sounds like science fiction, but it already exists,” it says on its website.
Actually, the story began within another larger company: Microsoft.
In January 2014, during a company hackathon (a meeting of programmers), a researcher named Paul Dietz had the idea of synchronizing a multitude of people in a stadium through a mobile application.
The idea was to “use people as pixels,” turning the entire audience into a kind of animated screen, he told the American magazine Fast Company.
He says it worked, but the participants complained that they were so busy looking at their smartphones that they couldn’t enjoy the effect.
So he looked for a better way to develop the product and discovered that he could show different images on a screen depending on the position of each viewer.
That same year he founded the company with Albert Ng, who had also worked at Microsoft and studied computer science. Dietz served as president, although he left the company last year.
Meanwhile, Delta was looking for startups to continue with its technological innovations, which began mostly with the use of biometric systems.
But how does this technology work?
Using people as pixels may sound strange, but the principle is simpler than it seems.
The basic principle is that projecting different colors in different directions allows different messages to be displayed. The key is to control where each beam of light goes.
Thus, a single pixel can emit green light toward you and red light toward the person next to you, its creators explain.
“It consists of displaying pixels that are capable of simultaneously emitting light rays of different colors in many directions at once,” said Dave Thompson, a Delta Airlines employee, at CES 2020.
On a conventional screen, everyone sees the same image; on a parallel reality screen, directed beams of varying color and brightness allow different messages to reach different viewers.
Misapplied Sciences has designed screens that can be configured specifically according to each message.
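As a thought experiment, the directional-pixel principle can be sketched in a few lines of code. This is only an illustrative model, not Misapplied Sciences' actual design: it treats each pixel as a table that maps viewing angles to colors, so what you see depends on where you stand.

```python
import math

class DirectionalPixel:
    """A toy pixel that emits a different color toward each angular sector."""

    def __init__(self, sector_degrees=10):
        self.sector_degrees = sector_degrees
        self.colors = {}  # sector index -> color aimed at that direction

    def _sector(self, angle_degrees):
        return int(angle_degrees // self.sector_degrees)

    def set_color(self, angle_degrees, color):
        # Aim a beam of this color at the given viewing angle.
        self.colors[self._sector(angle_degrees)] = color

    def color_seen_from(self, viewer_x, viewer_y):
        # What a viewer perceives depends only on their angle to the pixel.
        angle = math.degrees(math.atan2(viewer_y, viewer_x))
        return self.colors.get(self._sector(angle), "black")

pixel = DirectionalPixel()
pixel.set_color(30, "green")  # beam aimed at one traveler
pixel.set_color(60, "red")    # beam aimed at the traveler beside them

print(pixel.color_seen_from(2.0, 1.2))  # viewer standing near 30 degrees
print(pixel.color_seen_from(1.0, 1.8))  # viewer standing near 60 degrees
```

A whole screen of such pixels, each aiming a coordinated set of beams, is what lets two people side by side read entirely different messages.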
We do not know its price, although there is little doubt that it will be an expensive technology.
In the case of the airline, the system would work thanks to artificial intelligence software and cameras that are capable of recognizing up to 100 individuals.
The system does not use facial recognition, but it has nonetheless raised privacy concerns among some analysts.
The software will follow your movements through the airport and report your location in real time.
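The airport flow described above (a traveler scans a boarding pass, cameras track their position, and the display aims their flight details at their current angle) can be sketched as a toy model. All names here are illustrative assumptions for this sketch, not Delta's actual software.

```python
import math

messages = {}   # traveler id -> personalized message, set at boarding-pass scan
positions = {}  # traveler id -> (x, y) position reported by tracking cameras

def scan_boarding_pass(traveler_id, flight_info):
    messages[traveler_id] = flight_info

def update_position(traveler_id, x, y):
    positions[traveler_id] = (x, y)

def frame():
    """One display refresh: aim each traveler's message at their current angle."""
    beams = {}
    for tid, (x, y) in positions.items():
        angle = round(math.degrees(math.atan2(y, x)))
        beams[angle] = messages.get(tid, "")
    return beams

# Two travelers scan their passes and are tracked to different positions;
# each angle of the display carries a different message, in a different language.
scan_boarding_pass("A", "Flight DL202 - Gate B7 - Boarding 14:10")
scan_boarding_pass("B", "Vuelo DL915 - Puerta C2 - Embarque 14:25")
update_position("A", 2.0, 1.0)
update_position("B", 1.0, 2.0)
print(frame())
```

Note that no faces are stored in this sketch, only anonymous track IDs and positions, which mirrors the claim that the system works without facial recognition.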
Delta’s CEO said the company will use this innovation only internally, to “improve the experience of its customers.”
It is a phrase we have grown accustomed to hearing, repeated every time a technological innovation threatens to leave our data more exposed.