All posts by Admin

What is Augmented Reality?

Increasingly aware of the central role technology plays in everyone’s lives (including the lives of those who are not directly connected), many managers, directors, and entrepreneurs are looking for investment alternatives in digital marketing.

In addition to being extremely efficient, digital marketing is inexpensive, and its results can be easily measured, giving an accurate idea of a strategy’s success.

However, this form of marketing goes far beyond search engine optimization (SEO) and offering quality content. With more and more people connecting to the web through smartphones and other mobile devices, you need to think about how to use these technologies to the advantage of your digital strategy.


In this context, augmented reality can be of great help. Consider, for example, the success of the 2016 game Pokémon Go. In addition to appealing to the public’s nostalgia, the game uses augmented reality and geolocation to show monsters on the map and through the camera.

Within a week of release, Pokémon Go had added $7.5 billion to Nintendo’s market value, and in less than five months the game reached 100 million downloads across Android and iOS!


What is Augmented Reality?


The first point to make when asking what augmented reality is: it is very different from virtual reality. Virtual reality is an immersive environment, created entirely with computational tools, in which the user performs certain tasks; a good example is The Sims. Augmented reality, on the other hand, designates the interaction between virtual elements and the physical world. A good example of augmented reality is QR code tags at city sights.


With a reader app for this kind of tag installed on a tablet or smartphone, and an internet connection, the tourist can access a kind of virtual guide that suggests places to visit (including mapping the route to reach them) and, upon arrival, shows the history, curiosities, tour options and whatever else is relevant. Given this, you do not need to look very far to see why augmented reality is in such demand in many fields, including marketing and advertising.

How does it work?

Now that you know what augmented reality is, you probably wonder how it works. Simple: through software, a marker in the physical world, and GPS.


In the example above, the QR tag is the marker in the physical world, the tag-reader app is the software gateway that delivers information to the user, and GPS functions as the ‘eye’ of this software, since it indicates the user’s location in the physical world.
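As an illustrative sketch (the scenario and all names below are invented, not from any real guide app), the “eye” role of GPS boils down to comparing the user’s coordinates with those encoded at the marker. The haversine formula gives the distance between two GPS points:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two GPS coordinates."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical scenario: a QR tag at a sight encodes the sight's name and
# coordinates; the app compares them with the user's GPS position.
sight = {"name": "City Cathedral", "lat": 48.8530, "lon": 2.3499}
user_lat, user_lon = 48.8566, 2.3522

distance = haversine_km(user_lat, user_lon, sight["lat"], sight["lon"])
print(f"{sight['name']} is {distance:.2f} km away")
```

From here, the app could sort nearby sights by distance or trigger the virtual guide once the user arrives.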

Baby robot without a face

There are a huge number of robots in the world, and each of them has its own mission. For example, humanoid robots from Boston Dynamics can be used in construction and for loading heavy cargo onto trucks and ships. But among all of them there are devices that help older people with daily tasks and keep them from feeling lonely. Recently, the Japanese company Vstone created a robot called Hiro-chan, which partially solves exactly this problem. It is made in the form of a baby and is capable of expressing emotions. The creators believe that, while caring for it, people in nursing homes will feel that someone needs them. But why does the robot have no face, and why can it only reproduce sounds?


The unusual robot and the intent of its developers were described in IEEE Spectrum. Outwardly, the novelty looks like a soft toy in the form of a baby, capable of reproducing more than a hundred different sounds made by real infants. When the robot is left alone, it begins to cry, as if demanding to be picked up and reassured. If you pick it up and hug it, it slowly calms down and starts laughing. The developers believe that if older people care even for this device, they will get the opportunity to experience positive emotions and feel that they are not alone.


Why doesn’t the robot have a face?

Vstone’s employees decided to create a robot without a face because, at the moment, no one has managed to build a mechanism that can realistically portray emotions using facial expressions. No matter how hard engineers try, looking at “emotional” robots gives people an uncanny-valley feeling. We have written about this phenomenon many times. For example, in 2018 we covered the humanoid robot Sophia, which is very similar to a real person but, because of the sharp and unnatural movements of her face, terrifies people.


In order not to scare people, the engineers decided to create a robot that expresses its emotions solely with sounds. Using hundreds of different recordings of infants crying and laughing, they ensured that the robot changes its mood as smoothly as possible and does not provoke hostility. When a person picks up the robotic baby, the robot detects this with accelerometers that report a change in the position of its body relative to the surrounding space. If you hug it, and thereby express love, the robot begins to experience joy.
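A minimal sketch of how such behavior could be modeled in software (purely illustrative; this is not Vstone’s actual firmware, and the class and method names are invented):

```python
# Illustrative sketch: a mood value that drifts toward crying when the
# robot is alone and toward laughing when accelerometer readings suggest
# it is being held, so mood changes stay smooth rather than abrupt.

class BabyRobot:
    def __init__(self):
        self.mood = 0.0   # -1.0 = crying, 0.0 = neutral, +1.0 = laughing
        self.held = False

    def on_accelerometer(self, picked_up: bool):
        """Called when the accelerometers report a change in position."""
        self.held = picked_up

    def tick(self):
        """Advance one time step, shifting the mood gradually."""
        target = 1.0 if self.held else -1.0
        self.mood += 0.2 * (target - self.mood)  # smooth transition, no jumps
        if self.mood < -0.5:
            return "crying"
        if self.mood > 0.5:
            return "laughing"
        return "calm sounds"

robot = BabyRobot()
for _ in range(10):           # left alone: drifts toward crying
    state = robot.tick()
print(state)                  # crying

robot.on_accelerometer(True)  # picked up and hugged
for _ in range(10):
    state = robot.tick()
print(state)                  # laughing
```

The exponential smoothing in `tick` is one simple way to get the gradual, non-hostile mood changes the article describes.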

Admittedly, Vstone’s idea seems rather strange. At the moment, we cannot say for sure whether the robot can really save older people from loneliness. But perhaps the researchers know something that is not clear to us; after all, it was surely not in vain that they decided to work on such a strange project.


Trying out a Hiro-chan robot is not a big expense. Thanks to the absence of a complex mechanism for expressing emotions through facial expressions, the creators were able to keep the cost as low as possible. It reportedly costs no more than 5,500 yen, roughly 3,000 rubles. Many nursing homes will be able to afford such a robot, but demand will probably arise only from Japanese institutions. To residents of the United States, and especially Russia, robotic technology still seems somewhat suspicious, while in Japan robots have become quite familiar. By the way, you can read about the Japanese love for robots in our special material.

CES 2020 -Best Tech Products

CES is the world’s gathering place for everyone who thrives on the business of consumer technologies.

Round phone

The Circle Phone is the result of this line of thought, a reminder that sometimes, if something works, you don’t need to change it. What at first glance looks like a very large stopwatch is actually an Android phone; the logic behind this design is that it is easier to hold with one hand.

Its creators have been attending CES for five years now, and at least they get to talk about their product. Whether it will ever become real is another matter, since there is not even a date for the start of production.

The TV that needs a flashlight

Most screens emit their own light, which is why we can see what they show with the room lights off… but what if a screen didn’t? Working from that premise, Looking Glass engineers have developed a holographic screen capable of displaying content in three dimensions without the need for special glasses.

The interesting thing is that they have turned one of the disadvantages of this system, the need for an external light source, into a feature. At CES they demonstrated how a flashlight can be used to illuminate the screen’s content, letting us focus on specific areas.

Neon: when marketing goes too far

It was evident that Samsung had something big in store for CES. The marketing campaign began several weeks earlier, teasing something called “NEON”. There were whispers that the Korean company had managed to create an artificial human.

The reality, as usually happens, was a bit disappointing, mainly because Samsung itself does not seem very clear about what Neon is. Everything indicates that it is simply a service for creating digital avatars: animations of people who do not exist, created with artificial intelligence.

Put that way, it sounds awesome; but from there to calling them “artificial humans” is a stretch, especially because it is nothing new.

The robot that brings you the toilet paper

Hygiene always inspires remarkable creativity among technology companies, and this year was no exception; if anything, it felt more prominent than usual.

This is the case of Rollbot, a robot presented by Charmin, a US toilet-paper brand; as the name implies, the function of this two-wheeled companion is to bring us a roll of toilet paper when we run out.

AI mosquito detector

There is nothing worse than trying to fall asleep and suddenly hearing that familiar whine in your ear. Mosquitoes are a big problem that calls for drastic solutions; including, apparently, training an artificial intelligence to detect their presence.

The Bzigo is a camera system with a laser beam capable of identifying mosquitoes in the air; apparently, that is all it can do, since the laser is not powerful enough to “fry” the invader.

What is Virtual Reality?

Virtual reality is one of the concepts that has entered our lives. It makes it possible to experience an existing or fictional environment using mostly visual and audio tools, both of which are being enriched day by day.

What are the Tools for Virtual Reality?

What does it take to feel that we are inside this computer-generated universe? With the new generation of glasses made for this purpose, we now have the opportunity to hear virtual voices and move virtual objects. It seems quite difficult to predict the next move of this growing technology.

Virtual reality glasses have been deceiving people’s senses since 1965. The concept, born in a scientific article in America, has become increasingly “real”. Now, even if we cannot say it is completely widespread, it has found use in various fields. Virtual reality, which began with tiny screens developed for military and industrial use, has changed considerably over the years.

Where is Virtual Reality Used or Experienced Now?

In the 2000s, the products of technological development shrank and rapidly began to affect daily life and ordinary people, driven by accelerating advances in display technology. Wearable devices and glasses that hybridize with screens were designed, and many experiments, long the dream of scientists around the world, succeeded.

For example, international giants readily experiment with smart glasses; Google, Facebook, Microsoft and Intel are among them. In light of the university research they support, these giants push the boundaries, and the virtual reality glasses they produce, together with the accompanying software, are reaching more users day by day.

Thanks to hardware and software developers, this technology, which will offer rich experiences in more and more areas, is one of the strong factors that will shape our future. Even now, we have gradually started to experience virtual reality through smart screens and glasses in many areas, from education and health to automotive and industry.

The defense and entertainment sectors, where the technology originated, lead the way. In these areas, innovative applications are created with unique solutions that expand what people can do and make life easier. Today, the most common and easy route to a targeted experience or piece of information goes beyond what mobile phones alone can do: mobile access and the internet are fitting ever more naturally into our headphones and goggles.

With ever-evolving mechatronics and artificial intelligence, science has made such dreams come true. Glasses and similar accessories designed for virtual reality will soon serve as a personal consultant, making life easier in every respect.

Science and Art Together with Ease of Access

For example, institutions such as the French Cultural Centers in Turkey occasionally exhibit contemporary artists’ works, or the major collections of famous museums, using VR. The Mona Lisa may never physically travel to Turkey, and going abroad to see it can be very costly; with virtual reality technology, however, you can observe such special works up close from Izmir or Ankara.

In short, there is hardly any space that virtual reality does not touch. Virtual art in particular is perhaps the one point where contemporary art and science come so close to each other. Within the atmosphere created, you can explore a planet’s surface, a museum miles away, or even the world of a computer game you love. Moreover, everything comes to you and comes to life, providing an unforgettable experience. This must be the magic of reason and science…

We are surrounded by increasingly automated and autonomous science. From the question of whether driverless traffic is possible to the dream of a self-running kitchen, it seeks to make life easier through human creations. With ever-evolving technology, you can experience an environment as if you were really there. You usually enter this world with a headset and hand controllers, which convince your mind that what it sees is real.

Virtual Reality in the Future

The impact of virtual reality in personal applications is expected to grow. Of course, computer games play a major role in the rapid advancement of this technology, and we will continue to see virtual reality in the game industry, with both lifelike and fantastic worlds. The possibility of films, books and scientific studies turning into three-dimensional, interactive materials through virtual reality is breathtaking. Meanwhile, travel experiences, extreme sports and hard-to-reach places that appeal to our sense of discovery seem set to arrive in our living rooms through this technology. Soon you may feel the wind at a summit through your smartphone.

Nowadays, gaming devices are becoming more and more widespread. It should not be forgotten that this is also reflected in the health and education sectors, which compete with entertainment: there is already growing use in robotic surgery and in simulation-based training. In the long term, this is a technology that will branch out, take hold, and make life easier. It will continue to evolve.

How Machine Learning Works

It is easy to look at machine learning as a magical black box into which you insert data and out of which come predictions. But there is nothing magical about machine learning, writes IDG News. In fact, it is important to understand how the different parts of machine learning work in order to get better results. So, join us on a tour.

As in many other IT contexts, such as devops, the term “pipeline” is used in machine learning. It is a visual metaphor for how data flows through a solution. The pipeline can be roughly divided into four parts:

  1. Collect data, somewhat amusingly called “ingesting”.
  2. Prepare data, including data cleaning and normalization if needed. Normalization in this context should not be confused with normalization of relational databases; here it is about adapting different value scales to each other.
  3. Model training.
  4. Provide predictions.
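The four phases can be sketched end-to-end in a few lines. The example below is a toy with invented data, using a simple nearest-centroid classifier in place of real model training:

```python
# Toy end-to-end sketch of the four pipeline phases, with invented data.
# A nearest-centroid classifier stands in for "model training".

# 1. Ingest: collect raw (feature, label) records.
raw = [
    ((2.45, 60.0), "high_jump"), ((2.39, 58.0), "high_jump"),
    ((8.95, 75.0), "long_jump"), ((8.90, 77.0), "long_jump"),
]

# 2. Prepare: min-max normalize each feature to the 0..1 scale.
def fit_minmax(rows):
    cols = list(zip(*rows))
    return [(min(c), max(c)) for c in cols]

def transform(row, ranges):
    return tuple((v - lo) / (hi - lo) for v, (lo, hi) in zip(row, ranges))

features = [x for x, _ in raw]
ranges = fit_minmax(features)
prepared = [(transform(x, ranges), y) for x, y in raw]

# 3. Train: compute one centroid per class.
def train(rows):
    sums, counts = {}, {}
    for x, y in rows:
        counts[y] = counts.get(y, 0) + 1
        sums[y] = [s + v for s, v in zip(sums.get(y, [0.0] * len(x)), x)]
    return {y: tuple(s / counts[y] for s in sums[y]) for y in sums}

model = train(prepared)

# 4. Predict: classify a new record by its nearest centroid.
def predict(model, x, ranges):
    x = transform(x, ranges)
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    return min(model, key=lambda y: dist(model[y], x))

print(predict(model, (2.41, 59.0), ranges))  # high_jump
```

Each numbered comment corresponds to one pipeline part; in practice each part is a far larger system, but the data flow is the same.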

Here are more detailed descriptions of the four phases:

Decide on data

Two things are needed to get started with machine learning: data to train a model, and algorithms that control the training. Data can come from different sources; often it is data from some business process that is already being collected, either continuously or in archived form.

In some cases you have to work with streaming data. Then you can choose between processing the data as a stream or first storing it in a database. If you process the stream directly, there is a further choice between two options: either you use new data to fine-tune an existing model, or you build new models from time to time and train them with the new data.


These decisions affect the choice of algorithms. Some algorithms are suitable for fine-tuning existing models, others are not; in the latter case, you instead build a new model from scratch with the new data.

Data washing is often about scales

Data taken from many different sources can be quite messy. One thing that often needs to be arranged is normalization, i.e. converting different data values to the same scale.

A simple example: 2.45 meters in the high jump can be considered as valuable as 8.95 meters in the long jump, since both are world records. For the values to be treated as equally valuable, they need to be converted, normalized, for example to 1.0 in both cases.

But in some cases normalization is not appropriate, namely when the absolute scale actually matters. If you want to compare female and male high jumpers, it may be appropriate to normalize so that 2.45 meters for men has the same value as 2.09 meters for women, since both are world records. But if you want to compare high jumpers regardless of gender, then you should not normalize the values.
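This kind of record-relative normalization is a one-liner; a minimal sketch (event names and values are just for illustration):

```python
# Sketch: normalizing results against a reference value (here, the world
# record for each event), so that different scales become comparable.

world_records = {"high_jump_m": 2.45, "long_jump_m": 8.95}

def normalize(event, value):
    """Map a raw result to the 0..1 scale, where 1.0 equals the record."""
    return value / world_records[event]

print(normalize("high_jump_m", 2.45))  # 1.0 -- both records map to the
print(normalize("long_jump_m", 8.95))  # 1.0 -- same normalized value
print(normalize("high_jump_m", 2.21))  # ~0.90: 90% of the record
```

When the absolute scale matters (comparing jumpers regardless of gender, say), you would skip this step and compare the raw values directly.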

During the data preparation phase, it is also important to analyze how bias can affect models. This includes, for example, how data is selected for use and how it is normalized.

Time for hard training

The next phase is the actual training of a model. It involves using data to generate a model from which predictions can be made. The key activity during training is choosing settings, known as hyperparameter tuning.

A hyperparameter is a setting that controls how a model is created from an algorithm. A very simple example: if you want to divide a number of values into categories, the number of categories can itself be a hyperparameter. One way to arrive at good hyperparameters is simply to try them out, but in some cases these settings can be optimized automatically.
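Trying hyperparameters out can be as simple as a loop over candidate values. In this invented sketch, the hyperparameter is the decision threshold of a trivial classifier, chosen by accuracy on held-out data:

```python
# Sketch: tuning one hyperparameter (a decision threshold) by trying
# candidate values and keeping the one that scores best on held-out data.

validation = [(0.3, 0), (0.7, 1), (0.8, 1)]  # (feature, true label) pairs

def accuracy(threshold, rows):
    """Fraction of rows where 'feature >= threshold' matches the label."""
    hits = sum((x >= threshold) == bool(y) for x, y in rows)
    return hits / len(rows)

candidates = [0.1, 0.3, 0.5, 0.7]  # hyperparameter grid
best = max(candidates, key=lambda t: accuracy(t, validation))
print(best, accuracy(best, validation))
```

Grid search like this is the brute-force version of "trying them out"; automatic optimizers refine the same idea.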

Sometimes training can be run in parallel on several processors, which naturally brings performance benefits. These need not literally be separate processors; instead one speaks of “workers”, which in this case are simply different copies of a program running at the same time in different places.

Parallelization can mainly be done in two ways: different workers can work with different parts of the data set, or different workers can work with different parts of the model.
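The first variant, splitting the data set across workers, can be sketched as follows (an invented toy where each “worker” sums its slice of a gradient list; a real training framework does far more per step, but the split-and-combine shape is the same):

```python
# Sketch of data parallelism: each "worker" computes a partial result on
# its own slice of the data, and the partial results are then combined.

from concurrent.futures import ThreadPoolExecutor

gradients = [0.5, -0.2, 0.1, 0.4, -0.3, 0.2, 0.0, 0.3]  # invented numbers

def worker(chunk):
    """One worker processes its share of the data set."""
    return sum(chunk)

n_workers = 4
chunks = [gradients[i::n_workers] for i in range(n_workers)]  # split data

with ThreadPoolExecutor(max_workers=n_workers) as pool:
    partials = list(pool.map(worker, chunks))

total = sum(partials)  # combining step: same result as one worker alone
print(total)
```

In the second variant, model parallelism, the chunks would be pieces of the model rather than slices of the data.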

Time for delivery

The final phase is using the trained model, which can be called the “predict and deliver” phase. Now you run the model on new data to generate a prediction. If the task is face recognition, for example, the incoming data is a digital image of a face; based on training with other face images, the model can now make new predictions.

How you handle the activities in the different phases, or the different parts of the pipeline, varies. Using cloud services increases the chance of handling multiple parts in the same place, such as training data, pre-trained models, and so on.

In some cases, you must decide whether the different parts should be handled on servers or on client devices. One advantage of running processing on a client, such as a smartphone, is increased availability for the user. One potential disadvantage is poorer prediction quality, since there are fewer hardware resources; another is poorer performance, so it takes longer to generate a prediction.

Iterative working method

Illustrating the whole flow of machine learning as a pipeline, i.e. a pipe, is a bit misleading. The work is often iterative: certain phases are repeated and refined. The typical example is a model being fine-tuned with new data.

The advantage of thinking in terms of a pipeline with delimited parts is that it becomes easy to focus on the different parts as distinct areas that work in different ways.

A general observation is that what we call machine learning could just as well be called data analysis, or even mathematics, rather than AI. That machine learning is labeled AI may be because it is a technology that makes it possible to draw conclusions that humans, at least in most cases, cannot.