Bots

Multi Language Chat Bot Suggested Architecture

Natural conversations, by their very nature, allow for the flexibility of switching language mid-conversation. In fact, for multi-lingual individuals such as my brothers and me, switching between languages allows us to emphasize certain concepts without explicitly stating so. We generally speak in Polish (English if our wives are present), English to fill in words we don’t know in Polish, and Spanish to provide emphasis or a callback to something that happened in our childhood growing up in Puerto Rico. Chat bots, in their current state without Artificial General Intelligence, do not allow for the nuance of language choice. However, given the state of language recognition and machine translation, we can implement a somewhat intelligent multilingual chat bot. In fact, I design and develop the code for an automated approach in my book. In this post, I outline that general automatic approach below. Afterwards, I highlight the downsides of this approach and list the problems that need to be solved when creating a production-quality multi-language chat bot experience.

A Naive Approach

I call the fully automated approach naive. This is the type of approach most projects start off with. It’s somewhat easy to put in place and moves the project into the multilingual realm quite quickly, but it comes with its own set of challenges. Before I dive into those, let’s review the approach. Assuming we have a working English natural language model and English content, the bot can implement multilingual conversations as follows (a rough code sketch follows the list).

  1. Receive user input…
  2. … in their native language.
  3. Detect the user input language and store it in the user’s preferences.
  4. If the incoming message is not English, translate it into English.
  5. Send the English user utterance to the NLU platform.
  6. Execute logic and render English output.
  7. If the user’s language was not English, translate the output into the user’s native language.
  8. Send response back to user.
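
A minimal sketch of this loop might look like the following. The detectLanguage, translate, queryNlu and renderResponse functions are stand-ins for whichever detection, translation and NLU services a project uses; they are not part of any specific SDK.


// A rough sketch of the naive pipeline. detectLanguage, translate, queryNlu and
// renderResponse are hypothetical wrappers around whatever detection, translation
// and NLU services the project uses.
async function handleMessageNaive(user, text) {
    // Steps 1-3: detect the language and remember it as the user's preference.
    const language = await detectLanguage(text);
    user.preferences.language = language;

    // Step 4: translate non-English input into English before hitting the NLU model.
    const englishText = language === 'en' ? text : await translate(text, language, 'en');

    // Steps 5-6: resolve intent/entities and render the English response.
    const { intent, entities } = await queryNlu(englishText);
    const englishReply = await renderResponse(user, intent, entities);

    // Steps 7-8: translate the output back into the user's language and send it.
    return language === 'en' ? englishReply : await translate(englishReply, 'en', language);
}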

This approach works but the conversation quality is off. Although machine translation has improved by leaps and bounds, there are still cases in which the conversation feels stiff and culturally disconnected. There are three areas where this approach suffers.

  • Input utterance cultural nuances: utterance translation can sometimes feel awkward, especially for heavy slang or highly proprietary language. NLU model performance suffers as a result.
  • Ambiguous utterances affect conversation flow: a word like no or mama can easily flip the conversation into another language. For example, some language detection engines consistently classify the word no as Spanish. If the bot were to ask a yes/no question, answering no would trigger a response in Spanish.
  • Output translation branding quality: although automatic machine translation is a good start, companies and brands that want fine tuned control over their bot’s output will cringe at the output generated by the machine translation service.

Moving to a Hybrid Managed Approach

I address each issue separately. The answers to these problems vary based on risk aversion, content quality and available resources. I highlight options for each as we progress through the items.

Multi Language NLU

Ideally, I like my chat bot solutions to have an NLU model for each supported language. Obviously, the cost of creating and maintaining these models can be significant. For multi-language solutions, I always ask for the highest-priority languages that a client would like to support. If an enterprise can serve 90% of employees by getting two languages working well, then we can limit the NLU scope to those two languages, while using the automatic approach for any other languages. In many of my projects, I use Microsoft’s LUIS. I might create one model for English and another one for Simplified Chinese. That way, Chinese users don’t suffer the nuanced translation tax. Project stakeholders also need to decide whether the chat bot should support an arbitrary number of languages or limit valid inputs to languages with an NLU model. If it supports arbitrary languages, the automatic approach above is applied to languages without a native model.
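
As an illustration, the per-language routing decision can be as simple as a lookup table from detected language to NLU endpoint. The LUIS application IDs below are placeholders, not real models.


// Hypothetical mapping of natively supported languages to their own LUIS models.
// Any language not listed here falls back to the automatic translate-to-English path.
const nluModels = {
    'en': { appId: 'ENGLISH_LUIS_APP_ID', culture: 'en-us' },
    'zh-Hans': { appId: 'CHINESE_LUIS_APP_ID', culture: 'zh-cn' }
};

function getNluModel(language) {
    return nluModels[language] || null; // null means "use the automatic approach"
}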

Ambiguous Language Detection

The issue with ambiguous language detection is that short utterances may be valid in multiple languages. Further complicating the matter, translation APIs such as Microsoft’s and Google’s do not return alternative options and confidence levels. There are numerous approaches to resolving the ambiguous language problem. Two possible approaches are (1) run a concatenation of the last N user utterances through the language recognition engine, or (2) maintain a list of ambiguous words that we ignore for language detection, using the language of the user’s last utterance instead. Both are different flavors of treating the user’s language preference as a conversation-level rather than a message-level property. If we are interested in supporting switching between languages mid-conversation, a mix of both approaches works well.
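
A sketch of the second approach, combined with treating the preference as a conversation-level property, might look like this; the ambiguous word list and the detectLanguageApi call are illustrative assumptions.


// Words that legitimately appear in several supported languages; for these we
// trust the conversation-level preference instead of per-message detection.
const ambiguousUtterances = new Set(['no', 'mama', 'ok', 'si']);

async function resolveLanguage(user, text, recentUtterances) {
    const normalized = text.trim().toLowerCase();
    if (ambiguousUtterances.has(normalized)) {
        // Keep whatever language the conversation is already in.
        return user.preferences.language || 'en';
    }

    // Optionally give the detector more context by concatenating the last N utterances.
    const context = recentUtterances.concat(text).join(' ');
    const detected = await detectLanguageApi(context); // hypothetical detection call
    user.preferences.language = detected;
    return detected;
}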

Output Content Translation

As with the Multi Language NLU piece, I encourage clients to maintain the precise localized content sent by the chat bot, especially for public consumer or regulated-industry use cases where any mistranslated content might result in either pain for a brand or fines. This, again, is a risk-versus-effort calculation that needs to be performed by the right stakeholders. The necessity of controlling localized content, and the effort involved in doing so, typically hinges on whether the bot supports arbitrary languages or not.
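
In practice this often boils down to looking up curated content by key and language, and falling back to machine translation only for languages nobody has reviewed. The content table and the machineTranslate helper below are illustrative.


// Curated, brand-approved responses keyed by message id and language.
const localizedContent = {
    greeting: {
        'en': 'Hi! How can I help you today?',
        'es': '¡Hola! ¿En qué puedo ayudarte hoy?'
    }
};

async function getResponse(key, language) {
    const entry = localizedContent[key] || {};
    if (entry[language]) return entry[language];                             // managed translation
    if (entry['en']) return machineTranslate(entry['en'], 'en', language);   // automatic fallback
    throw new Error(`No content found for key "${key}"`);
}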

Final Architecture

Based on all the above, here is what a more complete approach to a multilingual chat bot experience looks like (a rough sketch of the flow follows the steps).

The bot in this case:

  1. Receives user input…
  2. … in their native language.
  3. Detects the user input language and stores it in the user’s preferences. Language detection is based both on an API and on utterance ambiguity rules.
  4. Depending on the detected language…
    1. If we have an NLU model for the detected language, the bot queries that NLU model.
    2. If not, and we want to support all languages, the bot translates the user’s message into English and uses the English NLU model to resolve intent. If we only want to support a closed set of languages, the bot may respond with a not-recognized kind of message.
  5. Executes the chat bot logic and renders localized output.
  6. If the user’s language was not English and our bot supports arbitrary languages, the bot automatically translates the output into the user’s native language.
  7. Sends response back to user.
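
Tying the earlier sketches together, the hybrid flow could be wired up roughly as follows. This is an outline under the same assumptions as before (hypothetical queryNlu, translate and executeDialogLogic helpers), not a drop-in implementation.


// supportsArbitraryLanguages is a configuration decision made by the stakeholders;
// executeDialogLogic stands in for the bot's business logic and returns a content key.
const supportsArbitraryLanguages = true;

async function handleMessageHybrid(user, text, recentUtterances) {
    const language = await resolveLanguage(user, text, recentUtterances);
    const model = getNluModel(language);

    let intentResult;
    if (model) {
        // A native NLU model exists for this language; query it directly.
        intentResult = await queryNlu(text, model);
    } else if (supportsArbitraryLanguages) {
        // Automatic fallback: translate to English and use the English model.
        const englishText = await translate(text, language, 'en');
        intentResult = await queryNlu(englishText, getNluModel('en'));
    } else {
        // Closed set of languages: politely decline.
        return getResponse('notSupported', language);
    }

    const replyKey = await executeDialogLogic(user, intentResult);
    return getResponse(replyKey, language);
}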

The managed models and paths to automatic translation add nuance to the automatic approach. If we imagine a spectrum with the fully automatic approach on one end and the fully managed approach on the other, all implementations fall somewhere within this spectrum. Clients in regulated industries and heavily branded scenarios will lean towards the fully managed end, while clients with internal or less precise use cases will typically find the automatic approach more effective and economical.

The hybrid managed/automatic implementation does take some effort but results in the best conversational experience. Let me know your experience!


Posted by Szymon in Bots

Dynamically Rendered Graphics for Conversational Experiences

About a year and a half ago, my team and I embarked on a journey to build a chat bot for a client in the financial industry. They had a remarkable amount of market and education data. One of our goals was to figure out the best way to consume all of that data and communicate it back to the user. In a text-only world, sending back this amount of data would be incredibly verbose.

To illustrate the point, let’s take a look at what data a financial stock quote may communicate. At a minimum, a quote is composed of the last price, change and change percentage for the latest trading session. In general, it is also useful to know the opening, high and low price for the day. The 52-week high and low are relevant as they give us more context around what the stock was doing over the last year. For example, in the Google Finance card below, we can tell that in the last year, Amazon had a low of $931 and since then has doubled. Crazy! A quote may have other information like the bid/ask prices and sizes. All this information is a Level I quote.

Say a user asked for an Amazon.com quote. What would a text message with all this data look like? Maybe something as follows:

The latest price for AMZN (Amazon Inc) was $1,788.02 at 9:48 AM EDT. This is a change of $8.80 (0.49%) for the day. The open price was $1,786.49 and the high and low are $1,801.83 and $1,741.64 respectively. The 52-week high and low are $1,880.05 and $931.75 respectively.

It should be clear that parsing through this text for every quote is mentally exhausting. It is not immediately clear if the stock is up or down. The color for the change is a nice touch in the card, something we lack in the text. The open, high, low and 52-week prices all blend in. If we were to ask for a few quotes in succession, we would develop a headache because of the massive amount of gymnastics the brain would have to go through. To many, all of this is obvious. It wasn’t to me when I first entered this space.

You sold me, now what?

Hopefully you agree that a graphical display of the financial data is easier to digest and more effective at conveying the information. In fact, this approach applies not only to financial data, but to other kinds of graphics as well. Take a chart of historical weather averages. Perhaps as part of a weather bot, we would like to display a chart of the last month of temperatures. Maybe a chart of the Los Angeles daily highs and lows, as well as hourly temperatures.

How do we go about generating a graphic like this to incorporate in our bot’s response?

This question has come up in various projects that I’ve been a part of. HTML and CSS always seemed like a good approach. The problem was that it is difficult to find a library that can take arbitrary HTML/CSS input and produce a faithful, standards-compliant rendering. In fact, this is usually an exercise in futility. For instance, in our .Net-based project we found some old libraries that ignored most modern web development techniques; we could only specify font sizes in pixels inline. What we really wanted was a WebKit (Apple) or Chromium (Google) based library maintained by a reputable party to do the work for us.

Headless Browsers

Headless browsers have been around for some time. One of the better-known classics may be PhantomJS (development has been suspended as of March 2018). The concept is to run an entire instance of a browser without displaying any user interface. The main use case for these would be something like automated unit and functional JavaScript tests. If functional tests on Single Page Apps were failing, it would be useful to take a screenshot of what the app looked like at the time.

Google’s Chrome gained a headless mode in 2017. One of the more exciting projects, Puppeteer, is a Node API for Headless Chrome maintained by the Chrome Dev Tools team. With Puppeteer, we can run scripts like this one below from the examples. It loads a page, enters text into an input box to search for articles and then scrapes the resulting page (source: https://github.com/GoogleChrome/puppeteer/blob/master/examples/search.js).


const puppeteer = require('puppeteer');

(async() => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  await page.goto('https://developers.google.com/web/');

  // Type into search box.
  await page.type('#searchbox input', 'Headless Chrome');

  // Wait for suggest overlay to appear and click "show all results".
  const allResultsSelector = '.devsite-suggest-all-results';
  await page.waitForSelector(allResultsSelector);
  await page.click(allResultsSelector);

  // Wait for the results page to load and display the results.
  const resultsSelector = '.gsc-results .gsc-thumbnail-inside a.gs-title';
  await page.waitForSelector(resultsSelector);

  // Extract the results from the page.
  const links = await page.evaluate(resultsSelector => {
    const anchors = Array.from(document.querySelectorAll(resultsSelector));
    return anchors.map(anchor => {
      const title = anchor.textContent.split('|')[0].trim();
      return `${title} - ${anchor.href}`;
    });
  }, resultsSelector);
  console.log(links.join('\n'));

  await browser.close();
})();

How can we leverage Puppeteer to fill our needs? We take advantage of the page.screenshot function, as shown in the code below. We first set the viewport to reflect the size of our screenshot. Notice that we ask Puppeteer to load the HTML using a data URL; an alternative is to create the file in a temporary folder on disk and point Chrome at it. When loading the content we pass a waitUntil parameter set to load. There are other options here that wait for the network to be idle; more information can be found in the Puppeteer documentation. Lastly, we take a screenshot. The omitBackground flag allows us to have transparent backgrounds in our screenshots. Because we pass the base64 encoding option, the result of the screenshot will be a base64-encoded string of PNG data.


// Renders the given HTML at the requested size and returns the screenshot
// as a base64-encoded PNG string.
async function renderHtml(html, width, height) {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();

    await page.setViewport({ width: width, height: height });
    await page.goto(`data:text/html,${html}`, { waitUntil: 'load' });
    const pageResult = await page.screenshot({ omitBackground: true, encoding: 'base64' });
    await page.close();
    // close() (rather than disconnect()) shuts down the Chromium instance we launched.
    await browser.close();
    return pageResult;
}


Once the image data is created, we can do just about anything with it. We can send it down to a bot as an inline PNG data URL, or we can upload it to a blob store like S3 and direct any channel to use the image from the blob store. In the rest of this post, we will create a Node server that simply responds to GET requests with the weather graphic above for any city passed through a URL parameter.
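
As a small illustration of the inline option, here is roughly how the screenshot could be attached to a reply using the Bot Framework Node.js SDK (botbuilder v3 assumed); note that not every channel accepts data URLs, so uploading to a blob store and passing its URL is often the safer route.


// Sketch: attaching the rendered screenshot to a Bot Framework (botbuilder v3) reply.
// `session` comes from the bot's dialog handler; renderHtml is the function above.
const builder = require('botbuilder');

async function sendWeatherCard(session, html) {
    const base64Image = await renderHtml(html, 764, 400);
    const message = new builder.Message(session).addAttachment({
        contentType: 'image/png',
        contentUrl: `data:image/png;base64,${base64Image}`,
        name: 'weather.png'
    });
    session.send(message);
}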

There is one more implication of Headless Chrome that we have not yet explicitly spelled out. The HTML we pass can include all manner of SVG, JavaScript, loading of external resources, and so on. We can truly take advantage of the various Chrome features and even create an SPA. For our weather graphic use case, we will use a JavaScript charting library to draw the visualization. With all the libraries available out there, we can get into some pretty nifty visualizations.

A Simple Weather Graphic Image Server

We will now walk through the creation of a simple Node server that generates these weather graphics for any city. As Facebook Messenger requires landscape images to have a 1.91:1 aspect ratio, we create a card of that size. We use C3.js, a charting library based on the well-known D3.js document manipulation library. Let us take a look at the card template HTML. Within it, we create a basic C3 timeseries chart that includes two x series: one for the daily high/low data and one for the hourly temperature data. Note that we use placeholders that will be replaced with the actual data used in the chart.


<html>

<head>
    <style>
        body {
            font-family: sans-serif;
            margin: 0;
            padding: 0;
            background: #ffffff;
        }
    </style>
    <script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/d3/5.5.0/d3.min.js"></script>
    <script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/c3/0.6.6/c3.min.js"></script>
    <link href="https://cdnjs.cloudflare.com/ajax/libs/c3/0.6.6/c3.min.css" rel="stylesheet" type="text/css">
</head>

<body>
    <div class="card">
        <div id="chart"></div>
    </div>
</body>

<script type="text/javascript">
    var chart = c3.generate({
        size: {
            width: 764,
            height: 400
        },
        data: {
            xFormat: '%Y-%m-%d-%H',
            xs: {
                'Low': 'x1',
                'High': 'x1',
                'Hourly': 'x2',
            },
            columns: [
                ['x1', { X }],
                ['x2', { X2 }],
                ['Low', { LOW }],
                ['High', { HIGH }],
                ['Hourly', { HR }]
            ]
        },
        point: {
            show: false
        },
        grid: {
            y: {
                show: true
            }
        },
        axis: {
            x: {
                type: 'timeseries',
                tick: {
                    count: 12,
                    format: '%Y-%m-%d'
                }
            }
        }
    });
</script>
</html>

As an example, if we were to set the following data in the columns:


columns: [
    ['x1', '2018-07-02-0','2018-07-03-0','2018-07-04-0','2018-07-05-0'],
    ['x2', '2018-07-02-0','2018-07-03-0','2018-07-04-0','2018-07-05-0'],
    ['Low', 63,64,63,62],
    ['High', 74,74,76,85],
    ['Hourly', 67,70,70,78]
]

We would see the following chart:

All that is left is for us to retrieve the data, transform it into the format required by C3.js, and we’ll have the graphic we want.

I found a free trial weather API that we can use for this purpose: World Weather Online. On their web site you can create an account and receive a trial key for 500 API calls a day. With the key in our possession, we can retrieve data using a URL in this format:

https://api.worldweatheronline.com/premium/v1/past-weather.ashx?key={INSERT_YOUR_KEY_HERE}&q=los%20angeles&format=json&date=2018-07-31&enddate=2018-08-01&tp=1

The tp parameter corresponds to the frequency of data points; in this case 1 means we receive hourly data. The q parameter is the name of the city. We can also pass the start (date) and end (enddate) dates for our request. The result of the query above, trimmed for brevity, is:


{
    "data": {
        "request": [
            {
                "type": "City",
                "query": "Los Angeles, United States of America"
            }
        ],
        "weather": [
            {
                "date": "2018-07-31",
                "astronomy": [
                    {
                        "sunrise": "06:04 AM",
                        "sunset": "07:55 PM",
                        "moonrise": "10:24 PM",
                        "moonset": "09:27 AM",
                        "moon_phase": "Waning Gibbous",
                        "moon_illumination": "83"
                    }
                ],
                "maxtempC": "30",
                "maxtempF": "86",
                "mintempC": "24",
                "mintempF": "76",
                "totalSnow_cm": "0.0",
                "sunHour": "13.0",
                "uvIndex": "0",
                "hourly": [
                    {
                        "time": "0",
                        "tempC": "23",
                        "tempF": "74",
                        "windspeedMiles": "1",
                        "windspeedKmph": "1",
                        "winddirDegree": "193",
                        "winddir16Point": "SSW",
                        "weatherCode": "116",
                        "weatherIconUrl": [
                            {
                                "value": "http://cdn.worldweatheronline.net/images/wsymbols01_png_64/wsymbol_0004_black_low_cloud.png"
                            }
                        ],
                        "weatherDesc": [
                            {
                                "value": "Partly cloudy"
                            }
                        ],
                        "precipMM": "0.0",
                        "humidity": "72",
                        "visibility": "10",
                        "pressure": "1013",
                        "cloudcover": "4",
                        "HeatIndexC": "24",
                        "HeatIndexF": "75",
                        "DewPointC": "18",
                        "DewPointF": "65",
                        "WindChillC": "24",
                        "WindChillF": "75",
                        "WindGustMiles": "4",
                        "WindGustKmph": "6",
                        "FeelsLikeC": "24",
                        "FeelsLikeF": "75"
                    },
                    {
                        "time": "100",
                        "tempC": "23",
                        "tempF": "74",
                        "windspeedMiles": "1",
                        "windspeedKmph": "2",
                        "winddirDegree": "193",
                        "winddir16Point": "SSW",
                        "weatherCode": "116",
                        "weatherIconUrl": [
                            {
                                "value": "http://cdn.worldweatheronline.net/images/wsymbols01_png_64/wsymbol_0004_black_low_cloud.png"
                            }
                        ],
                        "weatherDesc": [
                            {
                                "value": "Partly cloudy"
                            }
                        ],
                        "precipMM": "0.0",
                        "humidity": "73",
                        "visibility": "10",
                        "pressure": "1012",
                        "cloudcover": "4",
                        "HeatIndexC": "24",
                        "HeatIndexF": "75",
                        "DewPointC": "19",
                        "DewPointF": "65",
                        "WindChillC": "24",
                        "WindChillF": "75",
                        "WindGustMiles": "4",
                        "WindGustKmph": "6",
                        "FeelsLikeC": "24",
                        "FeelsLikeF": "75"
                    },
…
}

For every day we have the minimum and maximum temperatures, and for every hour we have a temperature. We can use some code to retrieve and parse this into something useful. I used the code below. The API sometimes resulted in a timeout error, so I built in retry logic. In effect, we retrieve the last 30 days of data and transform the objects into a format we can easily use.


// Requires assumed by this snippet: request-promise for HTTP calls, moment for
// date handling, and a small promise-based timeout helper used by the retry logic.
const rp = require('request-promise');
const moment = require('moment');
const timeout = ms => new Promise(resolve => setTimeout(resolve, ms));

async function getWeatherData(location) {
    const uri = `https://api.worldweatheronline.com/premium/v1/past-weather.ashx?key=${process.env.WEATHER_KEY}&q=${encodeURIComponent(location)}&format=json&date={start}&enddate={end}&tp=1`;
    const start = moment().add(-30, 'days');
    const end = moment().startOf('day');

    const data = [];
    let done = false;
    let errorCount = 0;
    while (!done) {
        const startStr = start.format('YYYY-MM-DD');
        const endStr = end.format('YYYY-MM-DD');
        const reqUri = uri.replace('{start}', startStr).replace('{end}', endStr);
        console.log(`fetching ${reqUri}`);

        try {
            const rawResponse = await rp({ uri: reqUri, json: true });
            const response = rawResponse.data.weather.map(item => {
                return {
                    date: item.date + '-0',
                    min: item.mintempF,
                    max: item.maxtempF,
                    hourly: item.hourly.map(hr => {
                        let date = moment(item.date);
                        date.hour(parseInt(hr.time) / 100);
                        date.minute(0); date.second(0);
                        return {
                            date: date.format('YYYY-MM-DD-HH'),
                            temp: hr.tempF
                        }
                    })
                };
            });
            response.forEach(item => { data.push(item) });
            done = true;
        } catch (error) {
            errorCount++;
            if (errorCount >= 3) return null;
            console.error('error... retrying');
            await timeout(3 * 1000);
        }
    }

    return data;
}

The last piece of code creates the GET endpoint on our server using restify, retrieves the weather data, populates the template HTML, takes a screenshot using Headless Chrome and responds with the image.


// Assumes a restify server and fs have been set up earlier, e.g.:
// const restify = require('restify');
// const fs = require('fs');
// const server = restify.createServer();
server.get('/api/:location', async (req, res, next) => {
    const location = req.params.location;
    const weatherData = await getWeatherData(location);

    if (weatherData == null) {
        // this means we got some error. we return Internal Server Error
        res.writeHead(500);
        res.end();
        next();
        return;
    }

    const x = weatherData.map(item => "'" + item.date + "'").join(',');
    const low = weatherData.map(item => item.min).join(',');
    const high = weatherData.map(item => item.max).join(',');

    const _x2 = [];
    const _hrs = [];
    weatherData.map(item => item.hourly).forEach(hr => hr.forEach(hri => _x2.push(hri.date)));
    weatherData.map(item => item.hourly).forEach(hr => hr.forEach(hri => _hrs.push(hri.temp)));
    const x2 = _x2.map(d => "'" + d + "'").join(',');
    const hrs = _hrs.join(',');

    let data = fs.readFileSync('cardTemplate.html', 'utf8');
    data = data.replace('{ X }', x);
    data = data.replace('{ LOW }', low);
    data = data.replace('{ HIGH }', high);
    data = data.replace('{ X2 }', x2);
    data = data.replace('{ HR }', hrs);

    const cardData = await renderHtml(data, 764, 400);
    // renderHtml returns a base64 string; convert it to a binary buffer before responding.
    const imageBuffer = Buffer.from(cardData, 'base64');

    res.writeHead(200, {
        'Content-Type': 'image/png',
        'Content-Length': imageBuffer.length
    });

    res.end(imageBuffer);

    next();
});


The result is that we can start the server with npm start, navigate to a URL like http://localhost:8080/api/Miami, and receive the following image.

Not bad for a few minutes of coding! I’ll assume that the low temperatures being higher than the hourly data is either a data quality issue or something I did wrong in the chart.

Conclusion

Clearly there’s more work to be done to take this into a production environment. The result looks somewhat pixelated; we could render a larger image and then resample it back down to get a higher quality result. You may have noticed some slowness in rendering: if we are remotely loading JavaScript and CSS resources, we may want to serve them from the same machine instead.
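
A related trick, if we stay with Puppeteer, is to bump the viewport’s deviceScaleFactor, which renders the same layout at twice the pixel density; the snippet below is a small variation of the renderHtml function from earlier.


// Variation of renderHtml that renders at 2x pixel density for a sharper image.
async function renderHtmlHiDpi(html, width, height) {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();

    // deviceScaleFactor: 2 doubles the backing bitmap without changing the layout size.
    await page.setViewport({ width: width, height: height, deviceScaleFactor: 2 });
    await page.goto(`data:text/html,${html}`, { waitUntil: 'load' });
    const screenshot = await page.screenshot({ omitBackground: true, encoding: 'base64' });

    await browser.close();
    return screenshot;
}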

Despite some issues, this is a sound approach, and with some fine tuning it can result in high-quality visualizations for our bot experiences or any other application that needs static visualizations.

In the .Net project I referenced earlier, we actually created a standalone ASP.Net Core web app on Azure that called into Puppeteer scripts using ASP.Net Core Node Services. It works very well and performs great. We did not spend too much time optimizing and were able to get performance to around 300ms, which is sufficient for our purposes.

You can find the full code sample on Github.

We dive into this technique in further detail in the context of bots in my book, Practical Bot Development.

Posted by Szymon in Bots

Time to Get Started with Chat and Voice

I have spent the last two and a half years of my career focusing on a technology that, back then, was easily dismissible. So much so that others at my work doubted we could build a successful business around it. At the time, chat bots had gained a somewhat notorious reputation for underwhelming users because of the bots’ limitations. From a technology implementation perspective, what was a clear attempt at providing narrow, but useful, conversational experiences became a target of Turing completeness ridicule. No way this is AI, they said. It was fair criticism, but very misplaced. In the past two years, chat bots have been gaining steam across the consumer and enterprise space. Bots are filling a real need.

Users who have a smartphone love their messaging apps. Look at the average user’s phone and you will find apps the likes of WhatsApp, WeChat, Snapchat, Facebook Messenger and so on. You know what you will not find? A mobile app for a local mechanic or a local flower shop. Users, millennials especially, heavily prefer messaging to calling. Messaging is convenient and, importantly, asynchronous. If we interact with friends using messaging apps, why should we interact with businesses any differently? The writing is on the wall, and companies from Facebook to Twitter and Apple are on board.

Of equal relevance are digital assistants like Alexa, Cortana and Google Assistant. As these become more and more integrated with our daily activities, our expectations around communicating with computer agents using natural language become more ingrained. I just attended the VOICE 2018 conference in Newark, NJ. The stories shared around our interactions with voice assistants resonated, especially as they reflect real usage in our homes. For instance, children love Alexa. They love asking her all kinds of questions, watching fun videos and, most recently, playing games by using gadgets like Echo Buttons. Nursing homes and the elderly stand to benefit as well; there is something human about being able to speak to Alexa at any time, especially for those living alone. For everyone in between, it acts as an appointment assistant, a task tracker or a glorified kitchen timer. As we become accustomed to these voice interactions, expecting the same level of natural language comprehension with all kinds of computer agents will become second nature.

As one would expect, there is significant overlap between the technologies powering chat bots and voice experiences. At the end of the day, a conversational experience is composed of a per-user state machine. An incoming user message gets distilled into an intent and an optional set of entities. Given a user’s state, incoming intent and entities, the state machine takes the three pieces as input and transitions the user to the next state. For example, if I begin a conversation with a bot I may be in a Begin state. If I say, What is the current weather?, the state machine would transition me to the CurrentWeather state, in which the right business logic to fetch the weather and generate a response would be executed. The collection of all these state transitions is the conversation. Natural Language Understanding (NLU) technologies such as Microsoft’s LUIS, Rasa NLU and Google’s Dialogflow, among many others, are the Narrow AI behind conversational experiences. There are also many options for developing the conversation engine that powers the state machine, such as Microsoft’s Bot Framework, Google’s Dialogflow, Amazon’s Lex, Watson Assistant and many others. Once we have an NLU system and a conversation engine, our last task is to build the business logic that provides responses to the combination of the user’s context and their input intents and entities.
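
To make the state machine idea concrete, here is a toy version that is not tied to any particular framework; the state and intent names echo the weather example above.


// A toy per-user conversational state machine: (state, intent, entities) -> next state + reply.
const transitions = {
    Begin: {
        CurrentWeather: (user, entities) => {
            const city = entities.city || user.defaultCity;
            return { nextState: 'CurrentWeather', reply: `Fetching the current weather for ${city}...` };
        }
    },
    CurrentWeather: {
        // further transitions, e.g. asking for a forecast, would go here
    }
};

function handleTurn(user, intent, entities) {
    const handler = (transitions[user.state] || {})[intent];
    if (!handler) {
        return { nextState: user.state, reply: "Sorry, I didn't understand that." };
    }
    const result = handler(user, entities);
    user.state = result.nextState;
    return result;
}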

The process of building voice and chat bots is very similar across the different tools. Many approaches leave the NLU and conversation engine pieces in the cloud and only call into your business logic as necessary. In my book, Practical Bot Development: Designing and Building Bots with Node.js and Microsoft Bot Framework, I make the explicit choice of using Microsoft’s Bot Framework, one of the more flexible options in the market, which my team has used across more than a dozen production bots. Microsoft’s approach allows developers the flexibility to implement their own conversation engine logic and, thus, is a great teaching tool. In the book, we make the journey from developing simple bots connected to Facebook Messenger to powering a Twilio phone conversation or an Alexa skill using the same technology. We integrate with Google’s OAuth and connect a chat bot to Google’s Calendar API. We discuss the ins and outs of NLU using LUIS, Adaptive Cards, dynamic graphics generation, human handover, bot analytics and many other topics. The goal is to excite and equip developers with the skills to build fun and impactful conversational experiences!

This is where it gets interesting; once we have the skills to build conversational experiences, what then? The truth is that this is still a new space and we are learning what it takes to build a truly engaging chat bot or voice skill. So much so that the technology to build these experiences is evolving at a breakneck pace. Although frustrating when writing a book, this should excite you! We know so little about this new way of interacting that the platforms are constantly improving the ways in which we communicate with users. The space needs innovators and forward-looking developers willing to showcase new and experimental applications that make users’ lives easier. There is no better time to jump into this space than now. Join us!

Posted by Szymon in Bots