14–17 Oct 2019

Implementing an AI multicloud broker

Holger Kyas (Open Group, Helvetia Insurances, University of Applied Sciences)
13:45–14:25 Thursday, 17 October 2019
Location: Westminster Suite
Average rating: 1.00 (1 rating)

Who is this presentation for?

  • Software engineers, architects, and scientists




The AI multicloud broker is triggered by Amazon Alexa and mediates between AWS Comprehend (Amazon), Azure Text Analytics (Microsoft), GCP Natural Language (Google), and Watson Tone Analyzer (IBM) to compare and analyze sentiment. The extended AI part generates new sentences (e.g., marketing slogans) with an RNN using long short-term memory (LSTM). Finding those sentences with very positive sentiment is the primary goal.

The idea of the AI multicloud broker evolved from a prototype I built in 2017 to integrate Amazon Alexa with IBM Watson. That was a lot of fun, although later API changes in the cloud caught me by surprise. Nevertheless, the idea of extending this scenario with stronger AI capabilities was born. After the ambiguous messaging around whether Amazon Alexa can capture free speech and whether that may be used for marketing, I concentrated on the technical possibilities of doing so within multicloud scenarios.

When I built the prototype in 2017, integrating Alexa with IBM's cognitive services to analyze emotions based on the Big Five model, capturing the input was tricky. The Alexa API offered the slot type AMAZON.Literal to capture free text beyond predefined intents. But it was deprecated in 2017, reactivated after developer protests, and finally deprecated for good in October 2018. Any skill using the AMAZON.Literal type stopped working and had to be migrated. Welcome to the cloud…even though this happens on-premises as well.

The recommended migration path was custom slot types, but that wasn't as easy as it sounded: free speech wasn't captured the way it had been before, and the documentation didn't explain why. Searching for alternatives, I found another built-in type, AMAZON.SearchQuery. A sample utterance in the interaction model may look like this: "Capture {Query}". Once the skill has been invoked, for example by saying "tone analyzer," saying "capture" followed by "I feel happy" captures "I feel happy" as {Query}. This works, but, as always, it is not easy to find out when you have many other things to manage.
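In the skill's interaction model, such a slot can be declared roughly as follows. This is a minimal sketch; the intent name `CaptureIntent` is illustrative, only the slot type AMAZON.SearchQuery and the carrier word "capture" in the sample come from the text above:

```json
{
  "interactionModel": {
    "languageModel": {
      "invocationName": "tone analyzer",
      "intents": [
        {
          "name": "CaptureIntent",
          "slots": [
            { "name": "Query", "type": "AMAZON.SearchQuery" }
          ],
          "samples": [
            "capture {Query}"
          ]
        }
      ]
    }
  }
}
```

Note that AMAZON.SearchQuery requires a carrier phrase around the slot ("capture" here); a sample consisting of the bare slot alone is rejected by the skill builder.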

Why capture free speech from the Alexa API at all? To engineer the multicloud scenario: a broker component consumes the free speech and calls API services from different clouds. For sentiment, these are AWS Comprehend (Amazon), Azure Text Analytics (Microsoft), GCP Natural Language (Google), and Watson Tone Analyzer (IBM).

Obviously, the Alexa skill could call AWS Comprehend directly, but it is more interesting to compare the results across all four services. The cloud broker component therefore handles the incoming and outgoing service calls from and to Alexa, and it also manages the service calls to the four different cloud services.
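The broker's fan-out can be sketched as follows. The four provider names come from the talk; the adapter functions are hypothetical stubs standing in for the real SDK or REST calls, with hard-coded scores purely for illustration:

```python
# Hypothetical adapters: in a real broker each would call the provider's
# sentiment API and normalize the response to a single positive-sentiment
# score. The constant return values below are placeholders.
def aws_comprehend(text):
    return 0.90

def azure_text_analytics(text):
    return 0.80

def gcp_natural_language(text):
    return 0.85

def watson_tone_analyzer(text):
    return 0.70

PROVIDERS = {
    "AWS Comprehend": aws_comprehend,
    "Azure Text Analytics": azure_text_analytics,
    "GCP Natural Language": gcp_natural_language,
    "Watson Tone Analyzer": watson_tone_analyzer,
}

def broker(text):
    """Fan the captured utterance out to all clouds and collect the scores."""
    return {name: call(text) for name, call in PROVIDERS.items()}

def most_positive(results):
    """Return the provider reporting the highest positive sentiment."""
    return max(results, key=results.get)
```

The normalization step hidden in each adapter is where the proprietary differences mentioned below show up: each provider returns sentiment in its own response shape and scale.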

My experience shows that there are major differences between the four APIs; despite shared standards such as HTTP/REST and JSON, each one feels proprietary.

After overcoming those hurdles, the core of the AI part was the generation of possible new sentences, such as marketing slogans. For this, my first choice was an RNN with LSTM neurons. This works well so far, but the harder part is generating sentences that make perfect sense. The area of creative AI is thrilling, but guidance on how to inject domain context so that generated assets fit well is only just evolving.
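Generation with such a network typically works token by token: the LSTM emits a softmax distribution over the next token, and a temperature parameter controls how adventurous the sampling is. The model itself is omitted here; this is only a sketch of the sampling step, with the probability list standing in for a hypothetical softmax output:

```python
import math
import random

def sample_with_temperature(probs, temperature=1.0, rng=random):
    """Re-weight a softmax distribution by temperature and draw a token index.

    Lower temperature sharpens the distribution (safer, more repetitive
    text); higher temperature flattens it (more surprising slogans).
    """
    # Convert back to logits, scale, and re-normalize (stable softmax).
    logits = [math.log(p) / temperature for p in probs]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    weights = [e / total for e in exps]

    # Draw from the re-weighted distribution.
    r = rng.random()
    cumulative = 0.0
    for index, weight in enumerate(weights):
        cumulative += weight
        if r < cumulative:
            return index
    return len(weights) - 1
```

Sampling many candidate sentences this way and keeping only those the broker scores as very positive is one straightforward way to pursue the stated goal.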

So: productivity for the AI multicloud broker is realistic after mastering some hurdles.

Prerequisite knowledge

  • A working knowledge of RESTful APIs, cognitive cloud services, and neural networks (useful but not required)

What you'll learn

  • Understand how the AI cloud broker has to deal with proprietary APIs despite standards like HTTP/REST or JSON
  • Learn how productivity for the AI multicloud broker is realistic, after mastering some hurdles

Holger Kyas

Open Group, Helvetia Insurances, University of Applied Sciences

Holger Kyas is Open Group Board Member for OpenCA Architecture Certifications, Enterprise Architect at Helvetia Insurances and Adjunct Professor at the University of Applied Sciences Bern in Switzerland. He has presented at international conferences like “IBM World of Watson” or “Insurance AI and Analytics Europe.”

