Exciting Advances in Natural Language Processing

Artificial intelligence has come a long way since the days of algorithmic chess games, and it's still evolving all the time. Weirdly, that's not so different from us humans…

Alan Turing hypothesized that, if humans can use what we've learned to inform our decisions, there should be no reason machines can't do the same. He even wrote a famous paper on it.



At the time, however, technology was nowhere near as capable as it is now. Computers could only execute the programs they were given, meaning they couldn't actually learn anything new.

In the more than half a century since Turing's paper, technology has come on in leaps and bounds. We now have Alexa, Siri, and Google Assistant sitting in our homes, adjusting the heating on command.

Chances are, you're reading this on a small device you're holding in your hand, and inside it there's a program that's continuously learning. It's pretty cool when you think about it.

Artificial intelligence has, indeed, come a long way, but there are still things we need to be working on. Natural language processing (NLP) is one of those things.

In a world that often feels closed off, computers and NLP are another way to feel connected. Although there's still a long way to go until we're chatting to Alexa the way we would to a friend.

Everyday advances in NLP


The English language has so many words that nobody can put a reliable estimate on how many. Furthermore, a lot of those words have more than one meaning.

Natural language processing is a branch of computer science dedicated to making it possible for computers to process language the way we do. At least, that’s the goal.

Computers can already understand us to a degree. You might've even had a bit of fun quoting Fight Club with Alexa, but we still have some way to go before computers actually understand the intent behind our sentences.

Understanding common sense and reasoning


Unfortunately, AI's biggest shortcoming is context. Every day, computer scientists are working on ways to help computers interact with humans more naturally.

One of these ways is with common sense reasoning. By collecting logical assumptions and teaching them to computers, we can make steps toward computers processing language the way that we do.

As mad as it sounds, among all the words and contexts we know, there are some we take for granted. Take a Mr. Men book, for example.

Mr. Funny is funny, hence his name. We know that, but it’s not something we think about. It’s sort of like saying, “the green light means GO.”

We know that and kids know that, but the world’s most intricate computer systems don’t.



By teaching computers these little snippets of common sense, there's a good chance we'll help them process language in a more human way.
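
To make the idea concrete, here's a minimal Python sketch of how common-sense facts can be stored as subject-relation-object assertions, in the spirit of knowledge bases like ConceptNet. The facts and the lookup helper are illustrative assumptions, not a real API:

```python
# A minimal sketch of common-sense knowledge as (subject, relation) -> object
# assertions, in the style of projects like ConceptNet. Facts are toy examples.
COMMON_SENSE = {
    ("green light", "MeansAction"): "go",
    ("red light", "MeansAction"): "stop",
    ("Mr. Funny", "HasProperty"): "funny",
}

def lookup(subject, relation):
    """Return the stored common-sense fact, or None if the system doesn't know it."""
    return COMMON_SENSE.get((subject, relation))

print(lookup("green light", "MeansAction"))  # go
print(lookup("blue light", "MeansAction"))   # None: no common sense stored
```

Real projects store millions of such assertions, but the principle is the same: facts we never bother to say out loud have to be written down explicitly for a machine.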

Sentiment analysis and sincerity


According to the philosopher Jean-Jacques Rousseau, authenticity is derived from the natural self, whereas inauthenticity is a result of external influences. He might as well have been talking about NLP.

Authenticity and sincerity (and their counterparts, fakeness and dishonesty) are fundamental parts of the way we communicate, although we tend to convey them through tone of voice or body language.

Detecting sentiment in text is a little more difficult. Despite its pitfalls, we've found ways to adapt thanks to years of texting, email, and instant messaging.

AI tools have been parsing text for decades. However, they're not very good when it comes to reading the sentiment behind it.

You know you've annoyed your partner when they text you saying, "I'm fine." Computers, on the other hand, really don't.



AI takes "I'm fine" at face value, as "I'm okay." Which raises the question: how do people think AI relationships are easy?
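
To see why, here's a toy sketch of lexicon-based sentiment scoring in Python. The tiny word list is made up for illustration; real tools use far larger lexicons, but they can trip over the same curt "I'm fine." in just the same way:

```python
# Toy lexicon-based sentiment scoring: sum per-word scores from a small
# hand-made lexicon. "fine" is listed as positive, so the curt, annoyed
# "I'm fine." comes out looking like a genuinely okay message.
LEXICON = {"fine": 1, "great": 2, "love": 2, "awful": -2, "hate": -2}

def score(text):
    """Return the summed sentiment score of a message."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    return sum(LEXICON.get(w, 0) for w in words)

print(score("I'm fine."))       # 1: reads as mildly positive
print(score("This is awful!"))  # -2: negative, as expected
```

The scorer has no notion of tone, context, or the conversation that came before, which is exactly the gap sentiment research is trying to close.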

A good example of sentiment analysis in text is the buffalo sentence. You know, Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo.

It's completely sincere and grammatically correct: there are no errors and no falseness whatsoever. At the moment, computers and programs can't reliably recognize either of those things.

However, studies and tests are ongoing even as we speak. Check out this Kaggle thread on using algorithms to weed out insincere questions from Quora.

Ongoing research into NLP


Research is always ongoing in the realm of not only NLP but AI in general. In fact, we’ll likely never discover the full capabilities of AI systems.

But that isn’t going to stop us from trying. Some of the research on the go currently is exciting.

Will chatbots be able to identify context and emotion?


As progress continues to be made in common-sense reasoning, sincerity, and sentiment analysis, we’ll get closer to finding this out.



Chatbots already help a lot of websites to run, but they aren’t exactly independent thinkers. Research on whether or not we’ll be able to get them to process context is ongoing.

● Will they be able to remember context they learn through conversation?
● Can they process emotion and respond accordingly? (For example, if a customer is disappointed.)
● Will they eventually replace humans in customer service?

Can AI not only generate a joke but understand why it’s funny?


Ask Alexa to tell you a joke and she’ll tell you one that’s been pre-programmed into her. And they’re usually as bad as the jokes in Christmas crackers.

● Will there be a point at which AI can generate its own jokes? Probably not.

That said, there are machines that have been created specifically to tell jokes, though the jury is out on whether or not that's a good thing.

Will AI be able to summarise an entire book?


Research on this has been going on for a while now, and we've seen impressive results for shorter texts. The next step is to see whether or not AI will eventually be able to summarise entire books.

● What advances will we see in fields like medicine and law?
● How will it affect or help with school and university reports?

Will they be able to do all of this unsupervised?


Herein lies the biggest question of them all.

Catherine Havasi, CEO of Luminoso, says that, without common sense reasoning, it’ll be hard to develop unsupervised systems. Although it’s not stopping anyone from trying.

Unsupervised learning is sort of like teaching yourself an instrument. Computers are given uncategorized, unlabeled data and left to work it out for themselves.
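
As a rough illustration, here's a tiny one-dimensional k-means clusterer in Python: the data points carry no labels, and the algorithm groups them on its own. The data and the two-cluster setup are assumptions chosen purely to show the idea:

```python
# A minimal sketch of unsupervised learning: one-dimensional k-means.
# Nobody tells the algorithm which group each number belongs to;
# it discovers the grouping by repeatedly refining cluster centers.
def kmeans_1d(points, k=2, iters=10):
    centers = [min(points), max(points)]  # crude initialisation (works for k=2)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return clusters

data = [1.0, 1.2, 0.8, 9.9, 10.1, 10.3]  # no labels attached
print(kmeans_1d(data))  # [[1.0, 1.2, 0.8], [9.9, 10.1, 10.3]]
```

Two groups emerge, low values and high values, even though the data arrived as one unlabeled jumble. That's the spirit of unsupervised learning, whether the points are numbers or sentences.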

But will they?




Problems we might see


Of course, nothing makes progress without possible pitfalls and problems rearing their ugly heads. That's just the way of progress.

One particular advancement that has people concerned is AI’s potential ability to write. Be it articles or theses, AI journalism is a thing.

But is it really a good thing?

If AI uses existing results to automatically generate text, how can it be consistently 100% plagiarism-free? After all, Copyscape and other plagiarism checkers run on algorithms themselves.

Natural language processing is advancing at rocket speed, and we're going to see it used all the more, be it in marketing, education, chatbots, or buffalo.

Artificial intelligence opens up an exciting realm of new possibilities. We just have to keep up.