Friday, April 7, 2023

What is artificial intelligence?

While numerous definitions of artificial intelligence (AI) have surfaced over the last few decades, John McCarthy offers the following definition in this 2004 paper (PDF, 106 KB) (link resides outside IBM): "It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable."
However, decades before this definition, the artificial intelligence conversation began with Alan Turing's seminal 1950 paper "Computing Machinery and Intelligence" (PDF, 89.8 KB) (link resides outside IBM). In this paper, Turing, who is often called the "father of computer science," asks the question, "Can machines think?" From there, he proposes a test, now famously known as the "Turing Test," in which a human interrogator would try to distinguish between a computer-generated and a human-written text response. While this test has undergone much scrutiny since it was published, it remains an important part of the history of AI and an ongoing concept within philosophy, as it draws on ideas around linguistics.
Stuart Russell and Peter Norvig then published Artificial Intelligence: A Modern Approach, which became one of the most influential textbooks on the subject. In it, they explore four potential goals or definitions of AI, which differentiate computer systems on the basis of rationality and of thinking vs. acting:


Human approach:

Systems that think like humans
Systems that act like humans

Ideal approach:

Systems that think rationally
Systems that act rationally

Alan Turing's definition would fall under the category of systems that act like humans.

Artificial intelligence, at its simplest, is a field that combines computer science and robust datasets to enable problem-solving. It also encompasses the sub-fields of machine learning and deep learning, which are frequently mentioned together with AI. These disciplines use AI algorithms to build expert systems that make predictions or categorise information based on incoming data.
The development of artificial intelligence is still the subject of much hype, as is the case with many emerging technologies. Product innovations like self-driving cars and personal assistants follow "a typical progression of innovation, from overenthusiasm through a period of disillusionment to an eventual understanding of the innovation's relevance and role in a market or domain," according to Gartner's hype cycle (link resides outside IBM).

Types of artificial intelligence: weak AI vs. strong AI
Weak AI, also known as Narrow AI or Artificial Narrow Intelligence (ANI), is AI that has been trained and directed to carry out particular tasks. The majority of the AI that exists today is weak AI. This form of AI is anything but weak; it enables some incredibly sophisticated applications, including Apple's Siri, Amazon's Alexa, IBM Watson, and autonomous vehicles. "Narrow" might be a more accurate descriptor for it.
Strong AI is made up of Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI). Artificial general intelligence (AGI), or general AI, is a theoretical form of AI where a machine would have an intelligence equal to humans; it would have a self-aware consciousness with the ability to solve problems, learn, and plan for the future. Artificial Super Intelligence (ASI), also known as superintelligence, would surpass the intelligence and ability of the human brain. While strong AI is still entirely theoretical with no practical examples in use today, that doesn't mean AI researchers aren't exploring its development. In the meantime, the best examples of ASI might come from science fiction, such as HAL, the superhuman, rogue computer assistant in 2001: A Space Odyssey.


Deep learning vs. machine learning

Since deep learning and machine learning tend to be used interchangeably, it’s worth noting the nuances between the two. As mentioned above, both deep learning and machine learning are sub-fields of artificial intelligence, and deep learning is actually a sub-field of machine learning.

Deep learning is built on neural networks. "Deep" in deep learning refers to a neural network comprised of more than three layers, inclusive of the input and output layers; such a network can be considered a deep learning algorithm. This layered structure is typically depicted as a stack of input, hidden, and output layers.
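To make the "more than three layers" idea concrete, here is a minimal sketch of a forward pass through such a stack. The layer sizes and the random (untrained) weights are purely illustrative assumptions, not a real model:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Nonlinearity applied between layers; without it, stacked
    # layers would collapse into a single linear transformation.
    return np.maximum(0.0, x)

# Layer sizes: 4 inputs -> two hidden layers of 8 -> 2 outputs.
# Counting input and output, that is more than three layers.
sizes = [4, 8, 8, 2]
weights = [rng.standard_normal((m, n)) * 0.1 for m, n in zip(sizes, sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    # Pass the input through each hidden layer in turn.
    for w, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ w + b)
    # Linear output layer produces the final predictions.
    return x @ weights[-1] + biases[-1]

print(forward(np.ones(4)).shape)  # (2,)
```

Training such a network (adjusting the weights via backpropagation, mentioned later in this article) is what turns this structure into a useful model.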

The way in which deep learning and machine learning differ is in how each algorithm learns. Deep learning automates much of the feature extraction piece of the process, eliminating some of the manual human intervention required and enabling the use of larger data sets. You can think of deep learning as "scalable machine learning," as Lex Fridman noted in an MIT lecture. Classical, or "non-deep," machine learning is more dependent on human intervention to learn. Human experts determine the hierarchy of features to understand the differences between data inputs, usually requiring more structured data to learn.
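The "human experts determine the features" step can be sketched as follows. The signal data, the chosen features, and the threshold model are all hypothetical, chosen only to show where the manual engineering sits in a classical pipeline:

```python
def hand_engineered_features(signal):
    # A human expert decides which properties of the raw data matter:
    # here, the maximum, minimum, and mean of the signal.
    return [max(signal), min(signal), sum(signal) / len(signal)]

def classical_model(feats):
    # A trivial stand-in for a trained classical model:
    # threshold on the mean (the third feature).
    return "positive" if feats[2] > 0.5 else "negative"

# Classical pipeline: raw data -> manual feature extraction -> model.
raw_signal = [0.1, 0.9, 0.4, 0.6]
features = hand_engineered_features(raw_signal)
print(classical_model(features))  # negative (mean is exactly 0.5)
```

A deep learning system would instead consume `raw_signal` directly and learn which properties of it are predictive, which is exactly the feature-extraction work automated away.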

Deep learning can leverage labeled datasets, also known as supervised learning, to inform its algorithm, but it doesn't necessarily require a labeled dataset. It can ingest unstructured data in its raw form (e.g. text, images), and it can automatically determine the hierarchy of features which distinguish different categories of data from one another. Unlike classical machine learning, it doesn't require human intervention to process data, allowing us to scale machine learning in more interesting ways.

Artificial intelligence applications

There are numerous, real-world applications of AI systems today. Below are some of the most common examples:

  • Speech recognition: It is also known as automatic speech recognition (ASR), computer speech recognition, or speech-to-text, and it is a capability which uses natural language processing (NLP) to process human speech into a written format. Many mobile devices incorporate speech recognition into their systems to conduct voice search—e.g. Siri—or provide more accessibility around texting. 
  • Customer service:  Online virtual agents are replacing human agents along the customer journey. They answer frequently asked questions (FAQs) around topics, like shipping, or provide personalized advice, cross-selling products or suggesting sizes for users, changing the way we think about customer engagement across websites and social media platforms. Examples include messaging bots on e-commerce sites with virtual agents, messaging apps, such as Slack and Facebook Messenger, and tasks usually done by virtual assistants and voice assistants.
  • Computer vision: This AI technology enables computers and systems to derive meaningful information from digital images, videos and other visual inputs, and based on those inputs, it can take action. This ability to provide recommendations distinguishes it from image recognition tasks. Powered by convolutional neural networks, computer vision has applications within photo tagging in social media, radiology imaging in healthcare, and self-driving cars within the automotive industry.  
  • Recommendation engines: Using past consumption behavior data, AI algorithms can help to discover data trends that can be used to develop more effective cross-selling strategies. This is used to make relevant add-on recommendations to customers during the checkout process for online retailers.
  • Automated stock trading: Designed to optimize stock portfolios, AI-driven high-frequency trading platforms make thousands or even millions of trades per day without human intervention. 
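The recommendation-engine idea above, discovering trends in past consumption behavior, can be sketched with a tiny item-to-item similarity example. The purchase matrix and item names are hypothetical; the technique shown (cosine similarity between item purchase histories) is one common basis for "customers who bought this also bought" suggestions:

```python
import math

# Each row is one customer's purchase history (1 = bought).
purchases = {                # columns: item A, item B, item C
    "alice": [1, 1, 0],
    "bob":   [1, 1, 1],
    "carol": [0, 1, 1],
}
items = ["A", "B", "C"]

def column(idx):
    # Purchase history of a single item across all customers.
    return [row[idx] for row in purchases.values()]

def cosine(u, v):
    # Cosine similarity: 1.0 means bought by exactly the same customers.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(cart_item):
    # Suggest the item whose buyers overlap most with the cart item's buyers.
    i = items.index(cart_item)
    scores = {items[j]: cosine(column(i), column(j))
              for j in range(len(items)) if j != i}
    return max(scores, key=scores.get)

print(recommend("A"))  # B: alice and bob both bought A and B together
```

Production systems use far richer signals (ratings, browsing, recency) and matrix-factorization or deep models, but the co-occurrence intuition is the same.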

 

History of artificial intelligence: Key dates and names

The idea of 'a machine that thinks' dates back to ancient Greece. But since the advent of electronic computing (and relative to some of the topics discussed in this article) important events and milestones in the evolution of artificial intelligence include the following: 

  • 1950: Alan Turing publishes Computing Machinery and Intelligence. In the paper, Turing—famous for breaking the Nazis' ENIGMA code during WWII—proposes to answer the question 'can machines think?' and introduces the Turing Test to determine if a computer can demonstrate the same intelligence (or the results of the same intelligence) as a human. The value of the Turing Test has been debated ever since.

  • 1956: John McCarthy coins the term 'artificial intelligence' at the first-ever AI conference at Dartmouth College. (McCarthy would go on to invent the Lisp language.) Later that year, Allen Newell, J.C. Shaw, and Herbert Simon create the Logic Theorist, the first-ever running AI software program.
  • 1967: Frank Rosenblatt builds the Mark 1 Perceptron, the first computer based on a neural network that 'learned' through trial and error. Just a year later, Marvin Minsky and Seymour Papert publish a book titled Perceptrons, which becomes both the landmark work on neural networks and, at least for a while, an argument against future neural network research projects.
  • 1980s: Neural networks, which use a backpropagation algorithm to train themselves, become widely used in AI applications.
  • 1997: IBM's Deep Blue beats then world chess champion Garry Kasparov, in a chess match (and rematch).
  • 2011: IBM Watson beats champions Ken Jennings and Brad Rutter at Jeopardy!
  • 2015: Baidu's Minwa supercomputer uses a special kind of deep neural network called a convolutional neural network to identify and categorize images with a higher rate of accuracy than the average human.
  • 2016: DeepMind's AlphaGo program, powered by a deep neural network, beats Lee Sedol, the world champion Go player, in a five-game match. The victory is significant given the huge number of possible moves as the game progresses (over 14.5 trillion after just four moves!). Later, Google purchased DeepMind for a reported USD 400 million.
                                                           Written by Rashid Iqbal



