
Posts

the Natural Language App, part 2

In part one of this article [9] we discussed the different kinds of chatty AI interfaces and the merits of a mixed natural-language/GUI interface. Now we will dig a little deeper into what is underneath the covers of a Natural Language Application (NLA).

Natural Language Processing Components

Natural Language Processing (NLP) has been around since the 1950s. We will exclude speech-to-text interfaces from this part of the discussion. Such interfaces have their own unique challenges but ultimately provide much the same “text” to an NLA. We will also only discuss an English NLA. Languages with different glyphs, syntax and grammar have to be dealt with separately. NLP is a cross-discipline between Linguistics and Computer Science. It consists of taking raw strings of text in a language and breaking them down into various components for classification. It usually consists of:

Sentence boundary detection (finding the unique sentences in some text)
Sy
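The excerpt breaks off partway through that list, but to make the first component concrete, here is a minimal sketch of sentence boundary detection (plus tokenization and part-of-speech tagging). It assumes the spaCy library and its small English model purely for illustration; the post itself does not name any particular tooling.

    # Illustrative sketch only (assumed tooling, not the post's own implementation):
    # sentence boundary detection, tokenization and part-of-speech tagging with spaCy.
    import spacy

    # small English pipeline; install with: python -m spacy download en_core_web_sm
    nlp = spacy.load("en_core_web_sm")

    text = "I deposited money at the bank. Then I walked along the river bank."
    doc = nlp(text)

    for sent in doc.sents:                          # sentence boundary detection
        print("Sentence:", sent.text)
        for token in sent:                          # tokens within the sentence
            print(f"  {token.text:<10} {token.pos_}")  # coarse part-of-speech tag

Running this splits the raw string into two sentences and tags each token, which is roughly the kind of structured output the later NLA stages would build on.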
Recent posts

the Natural Language App, part 1

Introduction

Natural Language Processing (or NLP) is the art of taking human written language (or indeed human spoken language) and analyzing it so that it can be used in some form or fashion. Advances in natural language processing have made it possible to embed human language understanding in software applications. Things such as personal assistants and bots are now commonplace. The next step is a more integrated approach, the nl-app. An nl-app is architecturally different and has its own architectural concerns, but that is for part 2 of this article. Before we start discussing this, we'll take a small detour through existing solutions and why I think there is a difference.

Personal assistants have arrived as a series of new devices like Alexa, Echo, Google Home, Siri, Bixby and a few others. These are stand-alone devices, usually with their own application API. There is great potential for such devices to interface with the Internet of Things (IoT), ordering online and other use cases. H

SimSage

Design of an Interactive A.I. for help desks and the Internet of Things

Sean Wilson and I started a semantic search company over a decade ago. This started my foray into intelligent systems, big data, and artificial intelligence. We left this company after eight years of hard work. This company is still operational today and doing well.

I always felt that there was something missing from a search-only solution. First I tried to make the search more intelligent. I tried many different approaches. I focused on getting better Word Sense Disambiguation (WSD) using neural networks. WSD can be thought of as being able to tell ambiguous usages of a word apart. “Jaguar”: are we talking about the car or the animal? “Bank”: did they mean a financial institution or the side of a river? This can usually be resolved from the immediate or larger context of whatever it is you’re looking at. This only led to better information retrieval, not anything remotely intelligent
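As a concrete illustration of the WSD task on the “bank” example, here is a small baseline sketch using the classic Lesk algorithm as shipped in NLTK. This is assumed tooling for illustration only, not the neural-network approach the post refers to.

    # Illustrative baseline only (assumed tooling): the Lesk algorithm from NLTK
    # picks the WordNet sense whose dictionary gloss overlaps most with the
    # surrounding context words.
    import nltk
    from nltk.wsd import lesk

    nltk.download("wordnet", quiet=True)   # Lesk needs the WordNet glosses

    context = "I sat on the bank of the river and watched the water flow".split()
    sense = lesk(context, "bank")          # disambiguate "bank" against its context

    if sense is not None:
        print(sense.name(), "-", sense.definition())

Lesk is a weak baseline and often picks an odd sense, which is part of why better WSD (for example with neural networks, as described above) matters for search quality.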

On reality and labels

Our reality is based on categorizations spun from our minds. For a thing to be defined by science, it needs an objective identity, something that makes it irrefutably unique. What makes a thing unique, apart from the thing itself, are the words and symbols used to categorize it. But those words and symbols can themselves only be defined by other words and symbols. And reality would have it that everything is unique, for no parts are shared. And yet we insist on putting labels on things, and wonder why they don't fit.