
SimSage

Design of an Interactive A.I. for help desks and the Internet of Things
Sean Wilson and I started a semantic search company over a decade ago.  That venture began my foray into 
intelligent systems, big data, and artificial intelligence. We left the company after eight years of hard 
work; it is still operational today and doing well.

I always felt that something was missing from a search-only solution.  At first I tried to make the 
search itself more intelligent, through many different approaches.  One focus was better Word Sense 
Disambiguation (WSD) using neural networks. WSD is the ability to tell ambiguous uses of words 
apart.  “Jaguar”: are we talking about the car or the animal? “Bank”: did they mean a 
financial institution or the side of a river? This can usually be resolved from the immediate or wider 
context of whatever it is you’re reading.
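
To make that concrete, here is a minimal sketch of context-based disambiguation in Java.  It uses a simplified Lesk-style gloss overlap, purely for illustration and not SimSage's actual neural WSD: the candidate sense whose dictionary gloss shares the most words with the surrounding context wins.

import java.util.Arrays;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Simplified Lesk-style disambiguation: score each candidate sense by
// how many of its gloss words appear in the surrounding context.
public class WsdSketch {
    static final Map<String, String> SENSES = Map.of(
        "bank/finance", "a financial institution that accepts deposits and lends money",
        "bank/river",   "the sloping land alongside a river or stream of water"
    );

    static String disambiguate(String context) {
        Set<String> ctx = new HashSet<>(Arrays.asList(context.toLowerCase().split("\\W+")));
        String best = null;
        int bestOverlap = -1;
        for (Map.Entry<String, String> sense : SENSES.entrySet()) {
            int overlap = 0;
            for (String glossWord : sense.getValue().toLowerCase().split("\\W+"))
                if (ctx.contains(glossWord)) overlap++;
            if (overlap > bestOverlap) { bestOverlap = overlap; best = sense.getKey(); }
        }
        return best;
    }

    public static void main(String[] args) {
        // "river" and "water" overlap with the river gloss, so bank/river wins.
        System.out.println(disambiguate("we sat on the bank of the river by the water"));
    }
}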

Clever as it was, this only led to better information retrieval, not anything remotely intelligent.
“A shift in paradigm and thinking was needed”

This has been achieved with SimSage.



                                       
                        Diagram 1: SimSage intent neural network

We expose functions in sets, so that different users of SimSage can customize their interactive 
experience to their needs.  A user inside the enterprise might want to use our system as a Q/A system with 
a semantic search engine, typing on a keyboard.  A user calling into the help desk over a phone 
line, on the other hand, might not want semantic search as a fallback when no function can be matched, but 
would rather talk to a human operator.  Each user can be set up with a profile that selects functionality groupings we call modules.

                        Diagram 2: SimSage vanilla built-in user module selector
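
As a rough sketch of what such a profile might look like in code (the class, enum, and module names below are illustrative, not SimSage's real API), a Java version could pair a set of enabled modules with a per-user fallback:

import java.util.EnumSet;
import java.util.Set;

// Hypothetical module selection: each profile enables a grouping of
// functions and names a fallback for when no function can be matched.
public class ProfileSketch {
    enum Module { QA, SEMANTIC_SEARCH, SPEECH, HUMAN_OPERATOR }

    record UserProfile(String name, Set<Module> modules, Module fallback) {}

    public static void main(String[] args) {
        // Enterprise user: keyboard Q/A with semantic search as the fallback.
        UserProfile analyst = new UserProfile("analyst",
                EnumSet.of(Module.QA, Module.SEMANTIC_SEARCH), Module.SEMANTIC_SEARCH);
        // Phone caller: speech in and out, human operator when nothing matches.
        UserProfile caller = new UserProfile("caller",
                EnumSet.of(Module.QA, Module.SPEECH), Module.HUMAN_OPERATOR);
        System.out.println(analyst);
        System.out.println(caller);
    }
}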


SimSage is thus a platform that mixes logic with neural networks.  We use deep-learning networks 
for speech-to-text (Mozilla DeepSpeech and Google Cloud's Speech-to-Text), and we have our own neural 
networks for Word Sense Disambiguation, speech synthesis, and user-intent detection.


                        Diagram 3: SimSage Question and Answer pipeline
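
In outline, the pipeline of Diagram 3 can be thought of as a chain of swappable stages.  The interfaces below are a minimal sketch under invented names, not SimSage internals; they only show how the neural stages (speech-to-text, intent detection) and the logic (function routing) slot together:

// Hypothetical pipeline stages: each slot can be filled by a different
// implementation, e.g. DeepSpeech or Google Cloud for speech-to-text.
public class PipelineSketch {
    interface SpeechToText { String transcribe(byte[] audio); }
    interface IntentDetector { String detect(String utterance); }
    interface FunctionRouter { String route(String intent, String utterance); }

    static String answer(byte[] audio, SpeechToText stt,
                         IntentDetector intents, FunctionRouter router) {
        String text = stt.transcribe(audio);   // neural: speech-to-text
        String intent = intents.detect(text);  // neural: intent network
        return router.route(intent, text);     // logic: match against a function set
    }
}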

Different clients connect to our platform through a RESTful JSON interface.
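
For illustration, a client query might look like the Java sketch below.  The endpoint URL and the JSON field names are placeholders invented for this example, not SimSage's published API:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Posts a JSON query to a hypothetical endpoint and prints the JSON reply.
public class RestClientSketch {
    public static void main(String[] args) throws Exception {
        String body = "{\"user\": \"analyst\", \"query\": \"how do I reset my password?\"}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://server.example/api/query")) // placeholder URL
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}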
 
                        Diagram 4: sample web U.I. for SimSage query interaction

This interface is fully multimedia-capable as well.  We've added some smarts for importing PDF manuals 
and marking up the data and images inside them for interactivity.

           
            Diagram 5: multimedia data and context-sensitive query functionality
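
As a rough idea of what a marked-up region could carry (field names invented for illustration; this is not SimSage's actual markup format), each interactive region on a page might pair a bounding box with the query it links to:

// Hypothetical markup record for an interactive region in an imported PDF.
public class PdfMarkupSketch {
    record Region(int page, double x, double y, double width, double height,
                  String kind, String linkedQuery) {}

    public static void main(String[] args) {
        Region figure = new Region(12, 72.0, 140.0, 300.0, 180.0,
                "image", "show me the pump assembly diagram");
        System.out.println(figure);
    }
}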


                    Diagram 6: semantic search, used only if enabled for the user

Our pure speech-to-text input doesn't have a U.I.  We process continuous speech using our own threshold 
algorithms for silence detection, and stream the audio to Google only when we hear the user talking.  Feedback is 
provided through speech and sounds.
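
The silence-detection idea can be sketched as a simple energy gate.  This is a minimal illustration of the general technique, not SimSage's actual algorithm: compute the RMS energy of each audio frame and keep streaming while recent frames are above a threshold.

// Energy-threshold silence gate: stream audio while speech is detected,
// with a short hangover so brief pauses don't cut the utterance off.
public class SilenceGate {
    static final double THRESHOLD = 0.02;   // RMS level; tune per microphone
    static final int HANGOVER_FRAMES = 30;  // frames to keep streaming after silence starts

    private int quietFrames = HANGOVER_FRAMES;

    // frame holds PCM samples scaled to [-1, 1]
    static double rms(double[] frame) {
        double sum = 0;
        for (double s : frame) sum += s * s;
        return Math.sqrt(sum / frame.length);
    }

    // Returns true while this frame should be streamed to the recognizer.
    boolean shouldStream(double[] frame) {
        if (rms(frame) > THRESHOLD) quietFrames = 0; else quietFrames++;
        return quietFrames < HANGOVER_FRAMES;
    }
}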

Listen to the following video of a speech-to-text session with SimSage, using a faceless Java client.



We are well on our way and have moved beyond search.  I hope this article can be of some help to 
fellow explorers.

Feel free to contact Sean or me at rock@simsage.nz, or try the product at https://simsage.nz
