Will artificial intelligence be responsible for saving whales?

Knowing the real-time location of whales at sea, which could be key to protecting them from ship traffic and oil spills, has ceased to be a distant dream of conservation groups and has become a reality thanks to artificial intelligence techniques.

Knowing where a group of cetaceans is at all times makes it possible to direct fishing vessels away from that area (preventing collisions and accidental catches), to concentrate cleanup work after an oil spill on the areas of greatest need, or to create protected areas in the places where the animals spend the most time.

But if tracking animals on land is already difficult, how can it be done in the immensity of the oceans, with species that travel thousands of kilometers and are extremely difficult even to count?

The answer can be found in artificial intelligence.

A collaborative project between the NGO Rainforest Connection (which already uses artificial intelligence to fight deforestation in the Amazon), Canada's Department of Fisheries and Oceans, and Google has set out to monitor the threatened population of killer whales in the Salish Sea, which washes the shores of metropolises such as Seattle and Vancouver.

“The logic is the same as what we apply in the Amazon. We use acoustic signals to locate the whales and pass that information to the relevant authorities so that they can act accordingly,” explained Rainforest Connection's founder, Topher White.

The acoustic signals are recorded at the bottom of the ocean by devices called hydrophones, which are anchored by cables and connected to network infrastructure that sends the audio in real time to a server, from which it is passed on to an artificial intelligence system.

Among the endless hours of audio, the artificial intelligence model is designed to detect whale songs, which, given the speed and distance at which sound travels underwater, can be picked up from as far as 50 kilometers away.

“Our job in this project is to teach artificial intelligence systems to detect, among all the variety of sounds captured in the ocean, specific species such as humpback whales or killer whales,” said Google AI product manager Julie Cattiau.
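To make the detection step concrete, here is a minimal sketch of how a whale-call detector of this kind might be trained on labeled hydrophone clips. It is not the project's actual model: the file paths and labels are hypothetical, and the simple log-mel-features-plus-logistic-regression setup is an illustrative assumption (a production system would use a deep neural network trained on far more data).

```python
# Minimal sketch: classify short hydrophone clips as "orca" vs. "background".
# File paths and labels below are hypothetical placeholders.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def clip_features(path, sr=16000, n_mels=64):
    """Load a clip and summarize it as a log-mel spectrogram vector."""
    audio, _ = librosa.load(path, sr=sr, mono=True)
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=n_mels)
    logmel = librosa.power_to_db(mel)
    # Averaging over time gives a fixed-size feature regardless of clip length.
    return logmel.mean(axis=1)

# Hypothetical training data: (path, label) pairs curated by biologists.
train = [("clips/orca_001.wav", 1), ("clips/ship_noise_001.wav", 0)]  # ...

X = np.stack([clip_features(p) for p, _ in train])
y = np.array([label for _, label in train])

model = LogisticRegression(max_iter=1000).fit(X, y)

# In production, each incoming hydrophone clip would be scored the same way,
# and clips above a confidence threshold forwarded to the authorities.
score = model.predict_proba(clip_features("clips/incoming.wav").reshape(1, -1))[0, 1]
print(f"orca probability: {score:.2f}")
```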

In the case of killer whales, which are so difficult to track that the International Union for Conservation of Nature considers there to be insufficient data to determine their conservation status, efforts are focused for now on the Salish Sea population, which has declined from hundreds of individuals to only 73 today.

Canada's Department of Fisheries and Oceans is responsible for capturing the sounds through a dozen hydrophones distributed across relatively shallow areas; just to train the artificial intelligence models, it has already provided programmers with 1,800 hours of underwater audio and 68,000 labels identifying the different sounds.

“Clearly we will not be able to cover the entire ocean with these devices, but you can choose specific places that matter. Shipping routes, for example, are generally very well defined, and they are of course a good place to start,” said Matt Harvey, a software engineer at Google AI.

As for humpback whales, the other species for which artificial intelligence models have been trained so far, monitoring is concentrated in the Hawaiian archipelago. They pose an additional challenge, since the sounds they emit are complex songs with a great variety of vocalizations that even change over time.

Thus, perhaps unexpectedly for the general public, artificial intelligence has emerged as a useful tool for answering, in the 21st century, what has for decades been one of the most publicized challenges of the conservation movement: saving the whales.

Artificial Intelligence Invents a Drug to Treat Humans for the First Time

Artificial intelligence has been used in drug development for the first time: an intelligent machine invented a molecule that is now being prepared for clinical trials in humans. The new drug was created through a collaboration between the British startup Exscientia and the Japanese pharmaceutical company Sumitomo Dainippon Pharma. According to the BBC, the new medication is intended for patients suffering from obsessive-compulsive disorder. Developing a compound to the point where testing can begin usually takes around five years; the drug created with artificial intelligence was ready in one.

Exscientia's chief executive, Professor Andrew Hopkins, called it “a key milestone in drug discovery.” “AI has been used to diagnose patients and to analyze patient data and scans, but this is the first direct use of AI in the creation of a new medicine,” he explained. The new molecule is called DSP-1181. It was created by algorithms that sifted through potential compounds, checking them against a huge database of parameters. “It takes billions of decisions to find the right molecules, and it is a huge challenge to precisely engineer a drug,” said Hopkins.
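The article gives no details of Exscientia's actual algorithms, but the idea of checking candidate compounds against a database of parameters can be pictured with a deliberately simplified sketch. The property names, values, and acceptable ranges below are hypothetical placeholders, not real screening criteria.

```python
# Purely illustrative sketch: keep only candidate compounds whose properties
# all fall inside target parameter ranges. Values here are made up.
CANDIDATES = [
    {"name": "mol-A", "weight": 350.2, "logP": 2.1, "hbd": 1},
    {"name": "mol-B", "weight": 612.9, "logP": 5.8, "hbd": 6},
]

# Hypothetical acceptable range for each parameter.
PARAMETER_RANGES = {"weight": (0, 500), "logP": (-0.4, 5.0), "hbd": (0, 5)}

def passes(candidate: dict) -> bool:
    """A candidate passes if every parameter lies within its target range."""
    return all(lo <= candidate[p] <= hi for p, (lo, hi) in PARAMETER_RANGES.items())

shortlist = [c["name"] for c in CANDIDATES if passes(c)]
print(shortlist)  # -> ['mol-A']
```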

“But the advantage of the algorithms is that they are agnostic, so they can be applied to any disease,” he added. The first trial phase of the new drug will be held in Japan; if successful, global trials will follow. The company is already working on potential medicines for treating cancer and cardiovascular disease, and the developers hope to have more molecules ready for clinical trials by the end of the year.

GitHub Now Uses AI to Help Address Open Issues

Large open source projects on GitHub have daunting lists of issues that need to be addressed. To make it easier to spot the most urgent ones, GitHub recently introduced the “good first issues” feature, which matches contributors with issues likely to fit their interests.

The initial version, launched in May 2019, produced recommendations based on labels applied to issues by project maintainers. But an updated version delivered last month incorporates an artificial intelligence algorithm that, according to GitHub, surfaces recommendations for around 70% of the repositories suggested to users.

GitHub notes that this is the first deep-learning-enabled product to launch on GitHub.com.

According to Tiferet Gazit, a senior machine learning engineer at GitHub, GitHub last year carried out analysis and manual curation to create a list of about 300 label names used by popular open source repositories, all of them synonyms for “good first issue” or “documentation,” such as “beginner friendly,” “easy bug fix,” and “low hanging fruit.” But relying on these labels meant only about 40% of the recommended repositories had issues that could be surfaced, and it left the burden of triaging and labeling issues on the project maintainers themselves.

The new AI recommendation system, by contrast, is largely automatic. Building it, however, required creating a training set annotated with hundreds of thousands of samples.

GitHub started with issues carrying one of the roughly 300 labels on the curated list, supplemented with a few additional sets of issues that were also likely to be suitable for beginners. (These included issues closed by a user who had never before contributed to the repository, as well as closed issues that touched only a few lines of code in a single file.) After detecting and removing near-duplicate issues, the training, validation, and test sets were separated across repositories to prevent data leakage from similar content, and GitHub trained the AI system using only pre-processed and denoised issue titles and bodies, so that the right issues can be detected as soon as they are opened.
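A rough sketch of how such weak-labeling heuristics might look in code follows. The issue fields and threshold values are hypothetical; GitHub has not published its exact rules.

```python
# Sketch of the weak-labeling heuristics described above, with made-up fields.
from dataclasses import dataclass

# Stand-in for the ~300-label curated list mentioned in the article.
CURATED_LABELS = {"good first issue", "beginner friendly", "easy bug fix", "documentation"}

@dataclass
class Issue:
    labels: set
    closed_by_first_time_contributor: bool
    files_touched: int
    lines_changed: int

def is_beginner_friendly(issue: Issue) -> bool:
    """Positive training example if labeled, closed by a newcomer, or a tiny fix."""
    if issue.labels & CURATED_LABELS:
        return True
    if issue.closed_by_first_time_contributor:
        return True
    # Closed issues that touched only a few lines in a single file.
    return issue.files_touched == 1 and issue.lines_changed <= 10  # hypothetical cutoff

example = Issue(labels=set(), closed_by_first_time_contributor=True,
                files_touched=3, lines_changed=120)
print(is_beginner_friendly(example))  # -> True
```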

In production, every issue for which the AI algorithm predicts a probability above the required threshold is eligible for recommendation, with a confidence score equal to its predicted probability. Open issues from unarchived public repositories that carry at least one of the labels on the curated list receive a confidence score based on the relevance of their labels, with synonyms of “good first issue” yielding higher confidence than synonyms of “documentation.” At the repository level, all detected issues are ranked primarily by their confidence score (label-based detections generally carry higher confidence than ML-based detections), with a penalty applied for the age of the issue.
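As a minimal sketch of that ranking logic, the following assumes hypothetical values for the probability threshold, the per-label confidences, and the age penalty; GitHub's real scoring is not public.

```python
# Sketch of threshold-plus-confidence ranking; all numbers are hypothetical.
import math

THRESHOLD = 0.7  # minimum model probability to be recommended at all

def confidence(issue):
    """Label-based detections outrank model-based ones, as described above."""
    if "good first issue" in issue["labels"]:
        return 0.95  # "good first issue" synonyms: highest confidence
    if "documentation" in issue["labels"]:
        return 0.80  # "documentation" synonyms: lower confidence
    p = issue["model_probability"]
    return p if p >= THRESHOLD else None  # below threshold: not recommended

def rank_key(issue):
    age_penalty = math.exp(-issue["age_days"] / 365)  # older issues rank lower
    return confidence(issue) * age_penalty

issues = [
    {"labels": {"good first issue"}, "model_probability": 0.0, "age_days": 30},
    {"labels": set(), "model_probability": 0.82, "age_days": 400},
]
candidates = [i for i in issues if confidence(i) is not None]
ranked = sorted(candidates, key=rank_key, reverse=True)
print(ranked)
```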

According to Gazit, the data acquisition, training, and inference pipelines run daily on scheduled workflows to ensure the results stay “fresh” and “relevant.” In the future, GitHub intends to add better signals to its recommendations, along with a mechanism for maintainers and triagers to approve or remove AI-based recommendations in their repositories. It also plans to extend issue recommendations into personalized suggestions about which issues to tackle next for anyone who has already contributed to a project.

Artificial Intelligence and process automation in 2020 – Predictions

The development of artificial intelligence (AI) and robotic process automation (RPA) accelerated rapidly in 2019, and the pace will pick up further this year.

Drawn by these technologies' potential to streamline workflows and improve customer service, organizations will deploy a growing number of AI and RPA systems. At the same time, the capabilities of these technologies will keep growing rapidly, and they will increasingly take on work that has required human intervention.

During 2020, five key trends will shape the AI and RPA space:

Rise of the RPA robot

Projects deploying RPA robots have so far tended to focus on replicating existing tasks traditionally completed by humans.

The robots learned a repetitive task and completed it more quickly. While their AI capabilities allowed them to read and understand certain documents, they could only do so in a very strict, rules-based way.

RPA robots will expand their capabilities in 2020

RPA robots will become more adept at making decisions based on the documents they scan and on data from other sources. Using rapidly advancing machine learning and artificial intelligence algorithms, the robots will be able to work independently and add value to the organizations in which they are deployed. They will also be usable in more complex processes, including ones that are less repetitive and more open to interpretation.

For example, an RPA robot could evaluate each incoming e-mail and determine how to respond to it or where it should be forwarded within the organization.
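As a toy illustration of that kind of decision, here is a keyword-based routing sketch. A real deployment would use a trained text classifier; the keywords and department names below are invented for the example.

```python
# Sketch of an email-routing step an RPA robot might perform.
# ROUTES is a hypothetical stand-in for a trained text classifier.
ROUTES = {
    "invoice": "accounts-payable",
    "refund": "customer-service",
    "password": "it-helpdesk",
}

def route_email(subject: str, body: str) -> str:
    """Return the department an email should be forwarded to."""
    text = f"{subject} {body}".lower()
    for keyword, department in ROUTES.items():
        if keyword in text:
            return department
    return "triage"  # no rule matched: escalate to a human

print(route_email("Refund request", "I was charged twice last month."))
# -> customer-service
```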

Analytics projects will continue to fail

Many applications and IT tools offer far more features and functionality than an organization actually uses. This wasted capacity is called the “consumption gap.”

Experience has shown that 75 to 90 percent of analytics projects fail, often because the power of the deployed technologies far exceeds users' ability to benefit from them.

To overcome this, businesses need to invest in workforce data literacy. Throughout 2020, they will have to think much harder when designing and deploying new systems, making sure those systems match the needs and capabilities of the people who use them.

Routine work will continue to disappear

The number of RPA robots and artificial intelligence chatbots will continue to grow within organizations throughout 2020. As a result, more than 50 percent of the work currently considered routine and repetitive will disappear.

The continuous development of the artificial intelligence that powers these bots will increase their capabilities and make it easier to provide users with a satisfying experience.

Increasing importance of data governance

More organizations will become aware of the importance of their data and of the impact that losing or mismanaging it can have. As a result, responsibility for data governance will move from analytics teams to senior managers.

As more data is used by artificial intelligence tools and RPA robots, the management of master data and metadata will become an even greater concern. Failing to protect this data at all times will have significant implications for the organization as a whole.

Data will become an additional revenue stream

During 2020, more organizations will come to understand the value of their data. Exactly how that data can be packaged effectively, and the mechanisms for making money from it, remain areas that need further thought.

An example is a retailer with detailed knowledge of customer buying habits; that data can in turn be used to plan and promote future product ranges.

Amid these trends, adoption of AI and RPA robots will grow rapidly throughout 2020. As organizations come to understand the benefits that can be achieved, sound business cases for investment will emerge.

What is Artificial Intelligence?


Artificial intelligence excited the minds of science fiction writers even before the first computer appeared. Of course, it was very tempting to imagine a machine that would understand you as a person and do everything for you. Better yet, it could simply be built, and that would be that: it would not need to be hired, raised, educated, or trained. Just assemble it, and it would somehow work on its own.

Moreover, it would not get tired, would not demand food or sleep, and would not be protected by the law at every turn. Simply put, it would be an electronic slave, and no one would object or raise the ethical questions; even the machine itself would only welcome such an arrangement.

The concept of intelligence has been formulated many times, from the thinkers of the past onward. But the concept of artificial intelligence means not simply “the same thing, only artificial” but something altogether different.

In the early 1980s, the computer scientists Avron Barr and Edward Feigenbaum proposed a definition of artificial intelligence that is still considered relevant today.

Artificial Intelligence Definition

Artificial intelligence (AI) is a field of computer science concerned with the development of intelligent computer systems, that is, systems with the capabilities we traditionally associate with the human mind – understanding language, learning, the ability to reason, solve problems, and so on.

It follows from the definition that AI is not a final product but a “field of computer science.” Moreover, the key words in the definition are “learning” and “the ability to reason.”

AI systems use algorithms and data to learn, reason, plan, and perceive situations in order to complete tasks autonomously. AI has been used in many fields, including healthcare, finance, robotics, gaming, and more.

AI algorithms are designed to make decisions, often using real-time data, unlike passive machines that are capable only of mechanical or predetermined responses.

These features do not make AI a replacement for human intelligence, which implies much more: the ability to express oneself, emotional attachment, ethical principles, and so on. But AI copes perfectly well with reasoning. The goals of artificial intelligence include computer-enhanced learning, decision making, and problem solving.

Artificial intelligence can help solve some of the most complex challenges society has faced and create a safer, healthier, and more prosperous world for everyone. In past posts, I have already shared great opportunities in healthcare and agriculture. But there is probably no area where more fascinating, or more significant, opportunities will open up than education and skills development.

Some common qualities of artificial intelligence:

Ability to learn from data: AI systems can use data to improve their performance over time without being explicitly programmed (a minimal sketch of this follows the list below).

Ability to make decisions: AI can analyze data and make choices based on that analysis, without human intervention.

Ability to perceive: some AI systems can perceive their environment and interpret visual or auditory information.

Ability to process natural language: AI can understand and respond to human language, including both speech and text.

Ability to reason and solve problems: AI can use logic and reasoning to solve problems, discover patterns, and make predictions.

Ability to self-correct: AI systems can detect their own mistakes and adapt to improve their performance.

Ability to adapt to new situations: AI can adjust to new situations and environments by learning from new data and experiences.

It is important to note that not all AI systems possess all of these qualities, and the degree to which they do varies.
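As promised above, here is a minimal, self-contained illustration of the first quality, learning from data: the model below is never given the decision rule explicitly; it recovers the rule from labeled examples. The data is synthetic, invented for the example.

```python
# Minimal sketch of "learning from data": a model improves from examples
# rather than from hand-written rules. The dataset is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # the rule the model must discover

model = LogisticRegression().fit(X, y)   # no rule is programmed explicitly
print(model.predict([[1.0, 1.0]]))       # -> [1], learned from the data
```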

In education, for example, personalized learning uses AI to adapt teaching methods and materials to the needs of individual students; automated assessment frees teachers from grading tests, leaving more time to work with students; and intelligent systems adapt the ways students discover and interact with information.