What is Artificial Intelligence (AI)? Everything you need to know and how we at Lynes use it.

Note: This post was written in 2022.

Remember that scene in the TV show “Friends” when Joey is asked,

Let me ask you one question. Do your friends ever have a conversation and you just nod along even though you’re not really sure what they’re talking about?

That’s pretty much been me with artificial intelligence (AI).

I’m familiar with the term and the basic concept of it, but when some expert starts going on about AI winters and deep learning, I’m definitely caught off guard. And just like Joey, I simply nod in agreement, hoping my cluelessness goes unnoticed.

So, I thought I’d do myself (and maybe you) a favor by answering the questions: what is AI, how does it work, and how is it used?

To help, I’ve got the entire internet at my disposal, and also Johan Åberg, our CPO at Lynes.

You see, Johan has done research in computer science at Linköping University, including in Artificial Intelligence.

With that being said, don’t blame Johan for this blog post (things might get awkward amongst his old researcher buddies).

Want to become an AI expert? Tag along!

What is AI?

In my quest for knowledge, I scheduled a lecture with Johan to answer the question of what AI is, and here’s my brief takeaway:

Artificial Intelligence (AI), also known as machine intelligence, is a machine’s ability to mimic our behavior and natural intelligence.

This includes cognitive functions such as learning from past experience, problem-solving, and planning and executing a series of actions – along with the ability to generalize to new situations.

It’s also the name of the field of study where one learns and explores how to create computer programs with intelligent behaviors.

When was it “invented”?

John McCarthy coined the term and defined what AI is at the Dartmouth conference in 1956. That’s considered the starting point, even though Alan Turing was already working on machine intelligence in the 1940s, when his cracking of the Enigma code paved the way for the Allies’ victory in World War II.

Anyway, back to Dartmouth, where bearded men gathered around this introduction:

We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.

– McCarthy, 1955

And just like that, AI was “invented” and defined. Simple as that. Sort of.

The Hype Cycle

When looking at the subject and its development, the easiest lens is the “Technology Hype Cycle,” which consists of five phases defined by Gartner. The curve – often compared to the Dunning-Kruger effect – illustrates where a technology stands:

  1. Innovation Trigger
  2. Peak of Inflated Expectations
  3. Trough of Disillusionment
  4. Slope of Enlightenment
  5. Plateau of Productivity

Here’s what it looks like for AI in 2022.

Source: Gartner

This cool graph shows how a technology emerges over time. It usually starts with hype – you know, a classic buzz where the media and others write and talk about it.

Then it peaks and turns down when everyone realizes that the thing doesn’t quite work – yet.

After a while, some technologies reach “the final station,” the plateau of productivity. These are the ones that end up in programs and apps that actually work and make a real contribution.

Research in AI

A lot has happened in the field since the 1950s. Fortunately, it’s not just bearded men driving the research forward anymore. Today, it is led by men (with or without beards), women, and non-binary people.

This is done, among other places, at OpenAI, which started as a non-profit organization founded by Elon Musk and others. Today, they conduct research into ethical AI – in other words, ensuring robots don’t take over the world. OpenAI is well-funded, with Microsoft investing around one billion dollars.

They have several AI projects live, such as DALL-E 2 and text summarization.

DALL-E 2 is an AI that creates an image based on the text you provide it; simply put, a digital artist. If I give the input “dog walking in the desert with an astronaut, in retro style,” I get this image:


Text summarization does exactly what it sounds like; it can take a book/text and give you a brief summary of its content, so you don’t have to read the whole thing. Imagine if this existed back when you were in school.

In the example below, it has read “Alice in Wonderland,” analyzed it, broken it down into sections, and come up with a summary that is actually very good! (NOTE: My assessment is based on having watched the movie, not reading the book.)

Spoiler alert

Image: OpenAI

These two functions can be used today via an API or integrated into custom apps.

Swedish research

We Swedes can also proudly say that we are involved in and conducting research in the field.

The Wallenberg family is heavily investing in one of the largest research projects in Swedish university history. It’s conducted at WASP (Wallenberg AI, Autonomous Systems and Software Program). Not to be confused with W.A.S.P., a heavy metal band that was big in the ’80s.

One reason why this is so essential is that research primarily occurs in English. This means that some of the data required to train algorithms for speech and natural language processing is missing, because Swedish sound and text don’t have the same foundation to build on.

It becomes especially important when we throw dialects into the equation. Sorry, Skåne, but I’m talking about you.

AI in everyday life

What everyone might not realize is that AI is everywhere, and we encounter it daily.

For instance, it can be in the form of:

  • Self-driving cars – perhaps you’ve heard of Tesla? When CEO Elon Musk isn’t buying micro-blogs, he and his team are heavily involved in self-driving cars.
  • When you do a search on the internet (maybe that’s how you found this article?)
  • Smart homes – such as homes that learn your schedule and adjust the temperature based on the outdoor temperature, etc.
  • E-commerce and marketing – as you probably know, it’s no coincidence that you are fed images of Air Fryers after browsing those types of sites.

AI for Business Telephony and Contact Centers

Okay, it’s everywhere. But what does it look like within business telephony? What’s required and what obstacles exist?

Two critical prerequisites are that it needs to work in real-time and on the web.

AI (in real-time)

Here, there has been a shift in trends: from algorithms built for maximum precision to algorithms built for quick execution. (“Quick execution” is just a more refined way of saying fast.)

This shift is partly because today’s algorithms already have very good precision, and partly because commercial solutions need swift execution to function as a service.

This trend is visible in the specialized chips required to process this kind of functionality, coming from manufacturers like Intel, AMD, NVIDIA, and Apple. These chips are now starting to appear in “regular” mobile phones and computers, enabling a plethora of things for us end-users.

AI on the Web

For you to access various cool features, AI needs to be executed in real-time and on the web. By “on the web,” I mean in a tab in your browser or in a mobile app.

This is made possible by:

  • WebAssembly, which allows near-native code to run in a web browser, like heavy-duty code for image analysis and augmented reality. Put simply, it lets intensive operations run in your browser by leveraging the power of the hardware.
  • WebGL, an API for rendering 3D graphics in web browsers (based on OpenGL).
  • WebGPU, an API for direct access to operations in the computer’s GPU via the browser. This means it can leverage the computer’s graphics processor to make things faster… and better.
  • Open Source frameworks, where TensorFlow.js, for instance, is an AI engine for web browsers. Open source means the source code is freely available for others to use and build on.
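To make this concrete, here’s a minimal sketch (my own illustration, not Lynes code) of how a web app might check which of these building blocks are available before enabling AI features. The helper name `detectWebAICapabilities` is made up; the checks themselves use standard browser globals.

```javascript
// Hypothetical helper: probe for the web-AI building blocks listed above.
// WebGPU in particular was still experimental in many 2022 browsers.
function detectWebAICapabilities() {
  return {
    // WebAssembly: near-native execution for heavy computation
    wasm:
      typeof WebAssembly === "object" &&
      typeof WebAssembly.instantiate === "function",
    // WebGL: GPU-accelerated rendering (also used by TensorFlow.js as a backend)
    webgl:
      typeof document !== "undefined" &&
      !!document.createElement("canvas").getContext("webgl"),
    // WebGPU: direct GPU compute access via the browser
    webgpu: typeof navigator !== "undefined" && "gpu" in navigator,
  };
}

const caps = detectWebAICapabilities();
```

An app could use a check like this to pick the fastest available backend – WebGPU where it exists, otherwise WebGL, otherwise plain WebAssembly.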

AI Features in Lynes

Now that the technical prerequisites are in place, we can start (and continue) implementing AI-based functionality in Lynes.

Whether it’s thanks to Johan Åberg, McCarthy, or WebGPU, I won’t say.

Just kidding, of course, it’s all thanks to you, Johan <3

Some good examples for us are:

  • Face Detection in Conferences: The technology can detect faces in the image and display a cropped & zoomed-in picture of the person. It can also track multiple faces in the same video feed, allowing for a wider view in, say, a conference room.
  • Selfie Segmentation: It identifies and cuts out body areas, enabling functions like a blurred background, a press wall, or an animated background.
  • Background Noise Removal: The app learns what is human speech and what isn’t. In this way, it can amplify that sound (if needed) and reduce distracting background noise.
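To give a feel for what selfie segmentation does under the hood, here’s a toy sketch (my own illustration, nothing like the actual Lynes pipeline): assume an ML model has already produced a per-pixel mask, and all that remains is compositing the person onto a virtual backdrop.

```javascript
// Toy compositing step: keep "person" pixels, replace the rest with a backdrop.
// In reality the mask comes from a segmentation model and the pixels are RGBA
// values; strings are used here just to make the swap visible.
function composite(frame, mask, backdrop) {
  return frame.map((pixel, i) => (mask[i] === 1 ? pixel : backdrop[i]));
}

const frame    = ["f0", "f1", "f2", "f3"]; // camera pixels
const mask     = [0, 1, 1, 0];             // model says pixels 1–2 are the person
const backdrop = ["b0", "b1", "b2", "b3"]; // virtual background

composite(frame, mask, backdrop); // → ["b0", "f1", "f2", "b3"]
```

A blurred background works the same way, except the background pixels are replaced with a blurred copy of the frame instead of a backdrop image.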

Being just the right amount of self-centered, I chose to illustrate face detection and selfie segmentation with a picture of myself.

The image demonstrates how we currently work with face detection and selfie segmentation and then place the person against any desired background, press wall, or animated image.

 

Nearby AI Solutions in the Industry

Of course, I couldn’t resist. Wanting more, I asked Johan how AI can be used in the future and what kind of functions we can expect (demand) going forward.

In customer care, speech-to-text is one such thing – calls could be automatically transcribed and even summarized in text form.

Another cool thing is automatic categorization of calls – say, a satisfied customer vs. a dissatisfied customer. The technology could draw a conclusion from what was said and how the customer expressed themselves during the call, and categorize it accordingly.
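As a back-of-the-napkin illustration of the idea (not a real implementation – a production system would run a trained sentiment model on the transcript), categorization could look something like this. The word lists and the `categorizeCall` helper are entirely made up:

```javascript
// Toy call categorization: score a transcript by counting positive vs.
// negative words. Ties count as "satisfied".
const POSITIVE = ["great", "thanks", "perfect", "happy"];
const NEGATIVE = ["broken", "refund", "angry", "cancel"];

function categorizeCall(transcript) {
  const words = transcript.toLowerCase().split(/\W+/);
  const score = words.reduce(
    (s, w) => s + (POSITIVE.includes(w) ? 1 : 0) - (NEGATIVE.includes(w) ? 1 : 0),
    0
  );
  return score >= 0 ? "satisfied" : "dissatisfied";
}

categorizeCall("Thanks, the fix was great!");      // → "satisfied"
categorizeCall("This is broken, I want a refund"); // → "dissatisfied"
```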

Looking a bit further into the future, we find even juicier stuff, like artificial general intelligence (AGI).

Imagine a bot that learns a service and can then autonomously interact with customers and handle support cases using natural language and self-learning.

Sooner or later some of us might be saying #ABotTookMyJob

Summary

I’ve already mentioned one exciting feature that’s already “alive”: text summarization.

I currently don’t have access to such a “gadget,” so I simply have to apply the classic “If you want something done, do it yourself.”

So here’s my own version of an AI-based function, where I summarize the entire post above in just 77 words:

Joey in Friends is less intelligent than his friends. He and Filip don’t know much about AI. AI has seasons. Filip googles and talks to Johan who has researched AI. Alan Turing ends World War II, and bearded men attend a conference. Gartner uses big words like Disillusionment. W.A.S.P. is a heavy metal band from the 80s. Intel and AMD manufacture chips. Lynes has AI functions in the app thanks to Johan and more always wants more.

Et voilà!

Want to know more about our phone system?

Leave your email and we'll get back to you!


Written by

Filip Flink

Self-proclaimed digital scientist who sees trends before the trend sees them itself. Also has a knack for exaggerating things. But only sometimes.