
Bots

Apr 25, 2017

Bots have been one of the most discussed topics of the last few years, driven by the growing AI market and rising interest in automating chat conversations and tasks.

In this overview, we will cover Bots’ structure, functions, and applications, and walk through a small example of setting up Windows Cortana as your own customized Bot communication channel.

What does a Bot do?

In a nutshell, Bots are software programs designed to automate human tasks or functions. Some are more complex than others, but most handle simple tasks.

You can find Bots all around the world in different applications. Many e-commerce websites present a chatbot in the bottom corner, and even Facebook is full of customized chatbots working for companies. Microsoft has the well-known Cortana, Amazon has Alexa, Apple has Siri, Google has its Assistant, and so on.

You’ve probably noticed that most of the examples above are somehow connected to a human communication channel, like chat or voice recognition. That’s because one of the main properties of a Bot is to communicate and react like a real human, and some are better at it than others, as we will see in the next topic.

 

Bots’ abilities

There are specific characteristics that distinguish Bots from each other:

  1. NLP – Natural Language Processing
    This concept describes how well the software understands and handles a human message, which can be conveyed in different ways, using different words and languages. It is a key concept if you want to apply more complex AI to your Bot, and it also makes the conversation more fluid and comfortable for the user interacting with it.
  2. IVR – Interactive Voice Response
    Unlike NLP, this concept involves the Bot giving the user a fixed range of options to deal with. It consists of a pre-recorded digital menu whose options can be selected to reach a real human or another menu. A classic example is calling your internet provider about a problem and having a machine on the other side offer options to navigate through the system.
  3. Artificial Intelligence – Machine Learning
    Bots can be smart. They can use a machine-learning architecture to learn new things, such as how to answer new questions or perform new tasks, based on the data they are given. Cleverbot, a chatbot that learns new answers from different questions and interactions, is a good example.
    It’s worth mentioning that a single Bot can implement all of the concepts above at once: for example, a Bot that offers different menus or options after receiving a voice or chat command, interpreting it, and learning from it.
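To make the IVR idea above concrete, here is a minimal sketch in C# of a menu-style bot. All menu options and messages here are invented for illustration; a real IVR system would sit behind a phone line or chat channel.

```csharp
using System;
using System.Collections.Generic;

public static class MenuBot
{
    // Pre-registered menu, just like the digital menu of a phone system.
    private static readonly Dictionary<string, string> Menu =
        new Dictionary<string, string>
        {
            { "1", "Connecting you to technical support." },
            { "2", "Connecting you to billing." },
            { "3", "Returning to the main menu." }
        };

    public static string Respond(string choice)
    {
        // Unrecognized input falls back to repeating the options,
        // just as a phone menu does.
        return Menu.TryGetValue(choice, out var reply)
            ? reply
            : "Sorry, I didn't get that. Press 1 for support, 2 for billing, 3 for the main menu.";
    }
}
```

Notice there is no language understanding here at all: the user navigates a fixed set of options, which is exactly what separates IVR from NLP.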

Bot applications

Technically, Bots can do whatever you want. However, you need to configure them for it: develop their AI if necessary, set up their communication channel, define their environment scope, and complete many other technical steps.

Depending on what you want your Bot to do, this can range from a very simple system to a very complex architecture. Bots can automate simple routine tasks or, in an enterprise environment, create an appointment or even open an incident for the company’s infrastructure support team.

Bots are also very common in games. If you have ever played a game where the machine controls a character, that character is a Bot!

Bots aren’t only used for good purposes, either. Have you ever received a spam e-mail? Well, there is a high chance it was sent by a Bot.

Setting up Cortana to be your personal customized Bot

So, as we mentioned at the beginning of this article, we will now set up Cortana to work as our personal customized Bot communication channel. It will understand what you say and trigger different parts of your code. All you need is Windows 10, Visual Studio 2017 with the UWP (Universal Windows Platform) workload installed, a microphone, and Cortana up and running.

  1. Start a new UWP project
    As the first step, start a new UWP project in Visual Studio 2017. You may be wondering: “Why a UWP project?”. That’s because Cortana can only communicate with UWP apps.
  2. Install Cortana-recognized voice commands
    It takes two steps to set up Cortana voice commands for your app: defining your VCD file and installing it.

a. Define your VCD file
In this step, we define which phrases Cortana can understand to invoke functionality in our app. The recognized voice commands are set up in an XML file in your project.

Create an XML file called “VoiceCommands.xml” in your project.

There is a specific structure for this XML file (VCD – Voice Command Definition). You can find the documentation and examples here.

For this example, we will use the following XML:

<?xml version="1.0" encoding="utf-8" ?>
<VoiceCommands xmlns="http://schemas.microsoft.com/voicecommands/1.2">
  <CommandSet Name="CortanaBot" xml:lang="en-us">
    <CommandPrefix>Integration test,</CommandPrefix>
    <Example>Cortana Integration</Example>
    <Command Name="showMyTasks">
      <Example>Hey Cortana, Integration test, show my tasks</Example>
      <ListenFor>show my tasks</ListenFor>
      <Feedback>Okay</Feedback>
      <Navigate/>
    </Command>
  </CommandSet>
</VoiceCommands>

Now, every time you speak to Cortana through the microphone, it will search the ListenFor tag values for one that matches the spoken phrase. If there is a match, Cortana retrieves the Name attribute from the Command tag and sends it to a method inside our code.

For example, using the above XML file:

If you say “Hey Cortana, Integration test, show my tasks”,

Cortana will search for a ListenFor tag matching the phrase inside an XML file whose CommandPrefix is “Integration test”, and it will find our “show my tasks”.
Cortana then takes the Command tag’s Name value (showMyTasks) and sends it to our code.

Maybe you’ve noticed that odd “Integration test,” in the example above. For Cortana to recognize your phrase, you need to say the CommandPrefix that you defined in your XML. In this example, that means every time you want Cortana to look up a command in your app, you need to say “Integration test,” at the beginning of your phrase.
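Conceptually, the matching rule works like this. The sketch below is only an illustration of the prefix-plus-phrase idea, not Cortana’s actual implementation:

```csharp
using System;

public static class VcdMatcher
{
    // Toy illustration of CommandPrefix + ListenFor matching:
    // the spoken phrase must start with the prefix, and the remainder
    // must match one of the ListenFor values.
    public static bool Matches(string spoken, string prefix, string listenFor)
    {
        if (!spoken.StartsWith(prefix, StringComparison.OrdinalIgnoreCase))
            return false;

        var remainder = spoken.Substring(prefix.Length).Trim();
        return remainder.Equals(listenFor, StringComparison.OrdinalIgnoreCase);
    }
}
```

So “Integration test, show my tasks” matches, while “show my tasks” alone does not, because the prefix is missing.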

b. Install your XML file
In your app code, create a method that installs the XML file we’ve just created.
Here is the code for our example:

private async void InstallVoiceCommands()
{
    // Load the VCD file shipped with the app package.
    StorageFile vcdStorageFile =
        await Package.Current.InstalledLocation.GetFileAsync(@"VoiceCommands.xml");

    // Register the voice commands with Cortana.
    await Windows.ApplicationModel.VoiceCommands.VoiceCommandDefinitionManager
        .InstallCommandDefinitionsFromStorageFileAsync(vcdStorageFile);
}

It is a good idea to call this method whenever your app starts. That guarantees the XML stays up to date.

3. Start your code from a voice command
Now, the last step to get Cortana working with your app is to define an entry point for when Cortana calls it, and it’s very simple.
To let Cortana into your app code, you need to override the OnActivated method in your App.xaml.cs file. That’s all. You can even add a small check to know whether your code was activated by voice (which is pretty useful).
Here’s the code:
protected override void OnActivated(IActivatedEventArgs args)
{
    // Test whether activation happened by voice.
    if (args.Kind != ActivationKind.VoiceCommand)
    {
        return;
    }
}

Now, you probably want to know which command Cortana recognized, right?
To do so, this is what you need:

var commandArgs = args as VoiceCommandActivatedEventArgs;
var speechRecognitionResult = commandArgs.Result;

// Get the recognized command.
var recognizedCommand = speechRecognitionResult.RulePath[0];

// Get the full spoken phrase converted to text.
var recognizedText = speechRecognitionResult.Text;

Now, the variable recognizedCommand holds the specific command Cortana recognized, and recognizedText holds the full spoken phrase.
It is highly recommended that you check out the VCD documentation mentioned above.
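With the recognized command in hand, you can dispatch to whatever your Bot should do. Here is a minimal sketch; the handler messages are our own invention, matching the showMyTasks command from the VCD above:

```csharp
using System;

public static class CommandDispatcher
{
    // Map the Command Name recognized by Cortana (e.g. "showMyTasks",
    // the Name attribute from the VCD file) to your Bot's behavior.
    public static string Dispatch(string recognizedCommand)
    {
        switch (recognizedCommand)
        {
            case "showMyTasks":
                return "Listing your tasks...";
            default:
                return "Unknown command: " + recognizedCommand;
        }
    }
}
```

Inside OnActivated you would simply call CommandDispatcher.Dispatch(recognizedCommand) after extracting the command as shown above, adding one case per Command tag in your VCD file.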

That’s it! We’ve set up Cortana to trigger our Bot app using voice recognition!

Knowing which command Cortana understood, you can implement whatever you want your Bot to do based on that command. And of course, if you want to, you can define multiple commands and even multiple languages in your XML file!

Bruno Belvedere is a passionate developer, very interested in new technologies, A.I. programming, games and sports.
