cognitive mobility and IoT cars

Some of the things I’ve collaborated to make:

Joe Speed IoT auto innovations

  • #AccessibleOlli crowdsourced cognitive self-driving transportation for aging & disabled, developed in partnership with Local Motors, IBM Accessibility, the CTA Foundation and many others including MIT and Princeton. It has been accepted for the AI XPRIZE and featured in MIT Tech Review.
  • Olli cognitive self-driving shuttle. Brought Local Motors and IBM together and led this co-creation of the future of cognitive mobility.
  • Cognitive Mobility prototypes for automakers. Combining Watson IoT for Automotive, Watson, Weather, Twitter, Bluemix, Twilio, Mapbox and other technologies into in-vehicle cognitive experiences.
  • “IoT Auto” 3D printed advanced technology vehicle with Local Motors, Octoblu’s Meshblu, Link Motion instrument cluster & infotainment on NXP i.MX with ARM TrustZone, and IBM technologies including IoT Foundation, Bluemix, MobileFirst (Worklight), Streams, Node-RED, MQTT. Exhibited at IBM Insight ’15 and SEMA ’15.
  • UNLV Robotics DARPA DRC-HUBO robot driving a Local Motors 3D printed car into the IBM Insight ’15 Expo. I was excited to help these things come together, get UNLV Robotics the exposure they deserve, and help train HUBO to drive his car 🙂
  • Aftermarket “car compute” on-ramp to IBM IoT + Bluemix cloud. Connects cars to our IBM cloud in seconds.
  • Continental’s prototype IoT + Big Data Connected Car cloud on Intel-powered Softlayer cloud for V2V and “highly automated” driving. Uses MQTT, Eclipse Paho, the MessageSight (Intel Inside) IoT appliance, Streams, Node-RED and other components. Here is German Chancellor Angela Merkel reviewing it.
  • Open IoT show car with Intel, IBM, Local Motors. 430hp crowdsourced open source car with MQTT, Intel Quark/Galileo running Node-RED (Node.js for IoT) as the car controller, an Intel Atom IVI system with Automotive Grade Linux (Tizen IVI) and MobileFirst UX, connected to the IBM Softlayer Connected Car cloud with IBM MessageSight, Big Data, Predictive Maintenance, Node-RED, MobileFirst Worklight.
  • Collaborated with Jaguar Land Rover developing LR4 and supercharged F-Type IoT show cars live demo’d at IBM Connect 2014, Tizen Dev Conf and IBM Innovate 2014. The solution used MQTT, AGL Tizen IVI, MessageSight, Streams, Node-RED, Predictive Maintenance, MobileFirst and the Softlayer cloud.
  • Led collaboration with QNX creating an IoT car shown at IOD 2013, CES 2014 and Pulse 2014. The solution uses MQTT, IBM Big Data, Streams (CEP), Predictive Maintenance, Softlayer cloud, MobileFirst, QNX, Texas Instruments and a Jeep Renegade.
  • Raspberry Pi powered IoT show car with Local Motors, Citrix Octoblu and the Linux Foundation. A planned 3D printed, IoT-enabled show car.
  • Drove creation and co-designed the IBM IoT & Connected Car showcase on Softlayer cloud used for IBM’s demos, proofs of concept, hackathons and developer outreach. Uses MQTT, Eclipse Paho, Bluemix (IBM’s Cloud Foundry), MobileFirst, Node-RED, MessageSight, Big Data, Streams, Predictive Maintenance, Softlayer cloud.
  • Won the AFCEA 2013 48-hour hackathon for my team’s IoT alerting & response prototype using open source MQTT, an IoT appliance, vehicle telematics and mobiles. We’d been given a scenario of a dirty bomb incident in downtown San Diego.
  • Worked with car2go and Daimler IT starting in 2012 to get IoT MQTT into all car2go, car2go EV and car2go black Mercedes vehicles. Daimler moovel’s car2go was the first to build open IoT into vehicles.

Joe Speed’s IoT, Connected Car and AI Mobility Keynotes, Panels and Press

Joe Speed’s keynotes, panels and speaking engagements

Joe Speed’s interviews, press and analyst mentions

Older press and keynotes

IoT MQTT handy links

Here is some info I gave an automaker tonight; thought it might help others:

Publishing Tizen IVI CANBus Data to the Cloud

Some practical tips to have Tizen IVI publish CAN bus data to the cloud via MQTT

Bridging Things and Services

I was asked to look into how to easily publish CAN bus data to the cloud using MQTT in the context of Tizen IVI. It turned out to be quite an interesting experience to understand how the different layers in Tizen (AMB, D-Bus, the WRT IVI plugin) work together. The goal is a quick prototype to make everything work end to end.

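The flow that post describes (AMB decodes frames off the CAN bus, a bridge publishes them to the cloud over MQTT) can be sketched in a few lines. This is only an illustration: the topic layout, field names and VIN below are my own assumptions, not the scheme from the original post.

```python
import json

def can_frame_to_mqtt(vin, arbitration_id, data):
    """Map one raw CAN frame to an MQTT topic + JSON payload.

    The topic layout and field names here are illustrative
    assumptions, not the scheme from the original post.
    """
    topic = "cars/%s/can/%03X" % (vin, arbitration_id)
    payload = json.dumps({
        "id": arbitration_id,
        "dlc": len(data),   # CAN data length code, 0-8 bytes
        "data": data.hex(),
    })
    return topic, payload

# Hypothetical VIN and CAN ID, purely for illustration
topic, payload = can_frame_to_mqtt("TESTVIN01", 0x244, bytes([0x12, 0x34]))
```

In a real bridge you would hand `topic` and `payload` to an MQTT client, e.g. an Eclipse Paho publish call, once AMB has surfaced the frame over D-Bus.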

thoughts about using IoT MQTT for V2V and Connected Car from CES 2014

Had a great time at CES 2014 showing my IoT tech with Continental AG in the Renaissance Hotel, Jaguar Land Rover + Intel at GENIVI Show Trump Tower, QNX + Qualcomm in North Hall LVCC, and IBM in the Venetian.  Really enjoyed CES, GENIVI Show, Tao, and especially Firefly on Paradise with my guys.  Thanks!


Spent 4 days with dozens of automakers and car system providers, fielding questions and showing live demos.  Based on those discussions I’ve pulled together information I hope is useful.  Or if not useful, at least interesting:

  • thoughts
  • rants
  • slides
  • monologue
  • MQTT vs HTTPS on mobiles – 93x faster, 11x less power to send, 170x less power to receive, 8x less network overhead.  QoS and LWT for reliable comms on unreliable networks.
  • IBM MessageSight – secure IoT appliance and core component of the global IBM IoT Cloud on Softlayer. Concurrently connects 21M vehicles & mobiles at up to 340.2M telematics messages/second per rack. Low latency: 60μs device-to-device MQTT messaging on a fast network (e.g. 10GigE) under load.
  • video of AT&T iPhone & Verizon iPad remote control of GENIVI Tizen HVAC; note the low latency: Scottsdale > Dallas cloud > car in MA changes temp > Dallas > Scottsdale
  • demos, how-to examples and source code 
  • demo video using QNX TI OMAP5 car
  • another demo, or even better, this one
    note the data path for what you’re seeing in the 2 side-by-side HTML5 apps is Verizon iPad Scottsdale AZ > Dallas Softlayer cloud > GENIVI Tizen car unit in Dartmouth MA > Dallas Softlayer cloud > AT&T iPhone Scottsdale AZ
  • try this MQTT HTML5 whiteboard demo on several mobiles side-by-side, and observe the latency.  This is running against  Softlayer’s Dallas cloud in North America.  Let me know if you need the URLs for this in Asia, Europe or elsewhere.
  • MQTT + MQTT-SN simplifies V2V / V2X. MQTT for V2V via cloud is 10s-100s ms; MQTT-SN for V2V via DSRC, ZigBee, 6LoWPAN, serial, UDP, et al. And Eclipse Mosquitto RSMB 1.3 is a 75KB MQTT + MQTT-SN broker for embedded that bridges a V2V mesh to the cloud. I think it wouldn’t be an exaggeration to say that RSMB is battle hardened.
  • MQTT/MQTT-SN for transport-independent V2V/V2X via dynamic mesh of DSRC/WAVE, Wi-Fi, LTE-A, cloud, à la ITA’s MQTT Sensor Fabric
  • Yes, the cloud is too slow to tell me the car ahead hit their brakes. For that you need 20ms latency. But you know what else besides DSRC can do that? Cameras, radars, lasers, many things. And DSRC doesn’t help with what’s around the next bend or over the next hill. It is very localized. Which is an issue, because there is a whole class of V2V problems that require macro-level awareness and real-time decisioning, for which higher latency (100ms+) is acceptable. Continental is a public example of someone tackling those using IoT + real-time Big Data geospatial decisioning in the cloud.
  • Yes, not every car is connected, nor will they be in the near term, so what about the blind spots? There are things that help mitigate that. For example, if a car senses other cars and publishes that data to the cloud, then the cloud knows about those cars even though they’re not connected. IoT MQTT cloud connectivity is software; it just requires a programmable TCU. Ford already made public that they’re pushing an OTA update to 3.4M existing cars adding more connected car capabilities, so many existing models can be enabled for V2V cloud, not just future ones. Automakers are launching aftermarket OBD2 dongles, such as this one with 3G/GPS/accelerometers/etc., that are $100 and less at retail ($20 in volume for an OEM?) and can cloud-connect via MQTT any car built after 1995.
  • Yes, it is an issue that each OEM has their own cloud. Solutions include having them use one cloud, for example what Continental AG is doing in having IBM provide the global connected car cloud that will be used by the multiple OEMs they supply. You can also tackle it via federated car clouds, interconnecting them and exchanging data in real time. While not trivial, federating clouds and ESBs is fairly well-traveled ground. The technology is there. MQTT even has a bridging convention whereby an MQTT broker (server) can also be a client. The same approach you use to have an MQTT broker like RSMB in the car bridging a local V2V mesh to the cloud can also be used to bridge one cloud to another, effectively interconnecting them. Yes, that requires agreement on data format (preferably protobuf or JSON) and topic space, but those are solvable.
  • I’ve nothing against DSRC; in fact I’m very excited to have MQTT-SN running atop DSRC/WAVE, and I think the pub/sub topics and QoS add to DSRC’s usefulness. That said, DSRC is early-90s RF tech that is beginning to show its age. And many of the use cases DSRC was intended for in V2V are being obviated by other sensor and wireless tech. I don’t need DSRC to tell me when the car ahead hits their brakes; the car is already getting several other sensors that can do that. And nobody has figured out where the $3 trillion is going to come from to get DSRC radios into all the cars and infrastructure. And we’re talking about adding that DSRC cost to cars that are already getting LTE Advanced and/or WiFi. You know what else besides DSRC can do peer-to-peer? LTE Advanced and WiFi. Credit for this to Roger Lanctot @rogermud, who clued me into this. He’s also the one who pointed out to me that latency is a driver distraction issue.
  • I already knew latency is an owner satisfaction and brand perception issue. She’s not going to buy your car if her buying experience includes standing in the rain for 30 to 90 seconds while the car ponders her request to unlock the doors.
  • MQTT client source for C, C++, Java, JavaScript, Python, Lua
  • MQTT + MQTT-SN bridging broker for embedded systems. 
  • MQTT-SN “MQTT for Sensor Networks” is designed for WSNs and mesh networks using datagrams instead of sockets. It works over DSRC, ZigBee, 6LoWPAN, LTE Advanced p2p, UDP, et al. Like MQTT it is an open standard, open source and royalty free. Some think it can greatly improve V2V / V2X by making it publish/subscribe based and transport independent (like the military’s). The meshing capabilities and the ability to propagate topics and messages across the mesh with QoS are really quite amazing. I’m trying to get permission to YouTube the videos from the field trials.
  • MQTT Sensor Fabric, a secure WSN mesh used by the US and UK military, uses RSMB, which is now open source per above.
  • MQTT 3.1 spec
  • MQTT 3.1.1 spec draft
  • MQTT-SN 1.2 spec 
  • Q: Is MQTT security better or worse than HTTPS? A: Neither. It is the same. They are both socket protocols that rely on TLS/SSL connection-level encryption. For example, to the cloud I often suggest TLS 1.2 with mutual authentication using “signed at the foundry” keys built into the chipset.
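On that TLS point, here is a minimal sketch of what the client-side configuration can look like, using Python’s standard ssl module. The file paths are placeholders (my assumptions): you would supply your CA bundle and the device’s “signed at the foundry” client certificate/key to get mutual authentication.

```python
import ssl

def make_mqtt_tls_context(ca_file=None, cert_file=None, key_file=None):
    """TLS 1.2+ client context for an MQTT-over-TLS connection.

    ca_file/cert_file/key_file are placeholder paths; load the CA
    bundle and the device's client cert/key for mutual auth.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.verify_mode = ssl.CERT_REQUIRED        # always verify the broker
    if ca_file:
        ctx.load_verify_locations(ca_file)
    if cert_file and key_file:
        ctx.load_cert_chain(cert_file, key_file)  # client-side auth
    return ctx

ctx = make_mqtt_tls_context()
```

With the Paho Python client, for example, a context like this would be passed to `tls_set_context()` before connecting.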

IoT Cloud enable GENIVI, Tizen, AGL in 5 minutes via MQTT

While helping one of my customers, I asked Ian Craggs @icraggs to prototype an MQTT + D-Bus bridge to cloud enable GENIVI, Tizen and Automotive Grade Linux. Ian surprised me with a command line solution that MQTT-enables them in 5 minutes. I expect we’ll still develop a bridge component for GENIVI, but this is a trivially easy solution folks can use now.

You can test it against this MQTT MessageSight instance ip= port=1883 in the cloud and use the MQTT Client web app on to test your GENIVI/Tizen/AGL publishing and subscribing.

Ian’s quick and easy solution as follows:

I have been experimenting with getting information from D-Bus to MQTT and back, and have come up with the following simple methods, using the Paho C MQTT client, available from the source repository

These methods do no transformation or extraction of information at the moment, but it is easy to change them to get the exact information we need.  Any questions, please ask.

D-Bus to MQTT

There is a Linux program called dbus-monitor which will listen for events and print them to stdout.  Using the stdinpub sample of the Paho C client (src/samples directory), raw Linux desktop notification messages can be published to an MQTT server by using a command like this:

dbus-monitor --session "interface=org.freedesktop.Notifications" | stdinpub Notifications --delimiter -1 --verbose --maxdatalen 1000

You can test this using the command:

notify-send summary body

The raw output looks like this (you can see where the “summary” and “body” fields end up):

signal sender=:1.78232 -> dest=:1.273 serial=122 path=/org/freedesktop/Notifications; interface=org.freedesktop.Notifications; member=NotificationClosed
uint32 13
uint32 1
method call sender=:1.81115 -> dest=org.freedesktop.Notifications serial=5 path=/org/freedesktop/Notifications; interface=org.freedesktop.Notifications; member=GetServerInformation
method call sender=:1.81115 -> dest=org.freedesktop.Notifications serial=6 path=/org/freedesktop/Notifications; interface=org.freedesktop.Notifications; member=Notify
string “notify-send”
uint32 0
string “”
string “summary”
string “body”
array [
array [
dict entry(
string “urgency”
variant             byte 1
dict entry(
string “category”
variant             string “”
int32 -1

We can easily extract just the relevant data from the events we are interested in.
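As a sketch of that extraction step, the quoted string arguments can be pulled out of dbus-monitor’s text output with a small parser before publishing. The sample below mimics the raw format shown above; for a Notify call, the string arguments arrive in the order app name, icon, summary, body.

```python
import re

# Sample of dbus-monitor's text output for a Notify method call
RAW = '''method call sender=:1.81115 -> dest=org.freedesktop.Notifications serial=6 path=/org/freedesktop/Notifications; interface=org.freedesktop.Notifications; member=Notify
string "notify-send"
uint32 0
string ""
string "summary"
string "body"'''

def extract_strings(raw):
    """Pull the quoted string arguments out of dbus-monitor output."""
    return re.findall(r'^string "(.*)"$', raw, re.MULTILINE)

fields = extract_strings(RAW)
summary, body = fields[2], fields[3]   # app name, icon, summary, body
```

In the pipeline above, something like this would sit between `dbus-monitor` and `stdinpub`, so only the fields you care about get published.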

MQTT to D-Bus

Using the notify-send program again and another Paho sample, stdoutsub, we can use MQTT to create desktop events via D-Bus:

stdoutsub notify --delimiter newline | xargs -I %body --delimiter \\n notify-send summary %body

We can use dbus-send instead of notify-send to create events on any D-Bus interface.

protobuf + MQTT is yummy fast goodness

MQTT is very fast and very efficient. But payload size & speed matter too. XML is too heavy and slow for mobile. JSON is much better. But for the absolutely smallest wire size and fastest serialization you need binary. The most obvious and mature solution is Google Protocol Buffers, aka “protobuf”.

Benchmarks of JSON vs protobuf vary, but most say protobuf objects are 2x smaller and serialization/deserialization is 2.5 to 4x faster.

I have “quite large” projects going on using the protobuf+MQTT combo in situations where the requirement is pushing events from 100s of sensors over 1 mobile connection and/or optimizing for lowest latency.

JSON and protobuf have their pros and cons, so be sure you make the right choice depending on your requirements. My suggestion for a good rule of thumb: protobuf for things, JSON for people, i.e. mobile apps and HTML5, since UX dev tools all support JSON.
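To make the wire-size gap concrete, here is a rough back-of-envelope comparison in Python. The fixed struct layout stands in for a binary encoding; it is not actual protobuf wire format, and the field layout is my own assumption.

```python
import json
import struct

# A hypothetical telematics reading
reading = {"sensor": 17, "ts": 1700000000, "temp": 21.5}

# Textual JSON payload with minimized separators
json_bytes = json.dumps(reading, separators=(",", ":")).encode()

# Fixed binary layout: u16 sensor id, u32 unix timestamp, f32 temp.
# A stand-in for a binary encoding; NOT actual protobuf wire format.
bin_bytes = struct.pack("<HIf", reading["sensor"],
                        reading["ts"], reading["temp"])
```

The binary record is 10 bytes; the JSON is several times that before you even add an envelope, which is exactly the kind of difference that matters when you push hundreds of sensor events over one mobile link.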

There are (prototype?) protobuf JavaScript implementations for example ProtoBuf.js if you want to use it in HTML5 apps.  No warranty expressed nor implied since I’ve not yet tested those.

Commentary and benchmarks:

Some thoughts from others:


JSON:

  • human readable/editable
  • can be parsed without knowing schema in advance
  • excellent browser support
  • less verbose than XML (relatively small on the wire)

protobuf:

  • very dense data (very small on the wire)
  • hard to robustly decode without knowing the schema (data format is internally ambiguous, and needs schema to clarify)
  • very fast processing
  • not intended for human eyes (dense binary)

Protocol buffers

Protocol buffers are binary and quite compact. You can immediately use the data you’ve received by just pointing to the right portions of a buffer (“zero-copy”), after a very light parsing. If you pass a lot of numbers, it may be quite noticeable. Zero-copy makes most sense for C or C++ code, though; in Java, Python, etc numbers and strings will probably change representation to fit the language.


JSON

JSON is a textual format. It is easy to parse efficiently, and you can even have zero-copy strings, but everything else must be parsed. JSON may become wasteful if you pass a lot of mappings with the same keys ([{"foo":1, "bar":2}, {"foo":3, "bar":4}]), but compression before transmission (mod_gzip, etc.) may eliminate this problem.


XML

XML has a very wasteful representation, but it has well-established verification and transformation tools. E.g. using XML Schema, one can express and check quite complex constraints. It also has a standard transformation language, XSLT, nicely homoiconic, but its syntax is totally not intended for humans.

My take

If I were to implement two communicating services, I’d take JSON.

JSON is the simplest of the three. JSON is easy to check by eye and write by hand. This is very important during debugging (and distributed systems rarely work right the first time).

I might consider protocol buffers if at least one end of the communication was performance-critical (presumably written in Java, or Go, or some other high-performance language) and I saw a clear performance problem in JSON-related code. Unless you have one Java backend communicating with hundreds of Perl frontends, and a few microseconds of latency are important for you, I don’t think you will see a significant performance difference between JSON and protocol buffers.

Unless there was a damn good reason, I wouldn’t consider XML at all.

There are good benchmarks from the Java community on serialization/deserialization and wire size of these technologies:

In general, JSON has a slightly larger wire size and slightly worse serialization/deserialization performance, but wins in ubiquity and the ability to interpret it easily without the source IDL.

Protocol buffers are designed for the wire:

1. Very small message size – one aspect is the very efficient variable-sized integer representation.

2. Very fast decoding – it is a binary protocol.

3. protobuf generates super efficient C++ for encoding and decoding the messages – hint: if you encode all var-integers or static-sized items, it will encode and decode at deterministic speed.

4. It offers a VERY rich data model – efficiently encoding very complex data structures.

JSON is just text, and it needs to be parsed. Hint: encoding a billion (1,000,000,000) as text takes 10 characters, while in binary it fits in a uint32_t. Now what about trying to encode a double? That would be far, far worse.
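The variable-sized integer point can be made concrete. Below is a minimal sketch of the base-128 “varint” encoding protobuf uses (my own re-implementation for illustration, not the google.protobuf library): a billion costs 10 bytes as decimal text but only 5 bytes as a varint.

```python
def encode_varint(n):
    """Base-128 varint, as protobuf encodes integers: 7 payload
    bits per byte, MSB set on every byte except the last."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)   # more bytes follow
        else:
            out.append(byte)
            return bytes(out)

billion = 1_000_000_000
text_size = len(str(billion))              # bytes as decimal text
varint_size = len(encode_varint(billion))  # bytes as a varint
```

The 300 → b"\xac\x02" case from the protobuf documentation is a handy sanity check for any varint implementation.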

Latency is a Driver Distraction issue

Latency is a driver distraction issue. Faster communications and faster decisioning save lives. There are many automakers and tier-1 suppliers working with me on that. Obviously there are also the other values it brings, such as a better driver experience and a better owner experience. For example, being able from your smartphone to find your car, unlock your car, etc. with “key fob response time”, i.e. sub-second, faster than you lift your finger off the button. Which is more impressive after you learn that the connected car systems out there today typically yield 15 to 90 second response times. One luxury German brand, for example, when you click the smartphone app to “find car”, brings up a dialogue saying “Finding your car. This may take a few minutes.” And it does.

You can see the alternative, the faster connected car driver experience, in what Sprint Velocity is showing at the upcoming LA Auto Show.


REST is for sleeping. MQTT is for Mobile

REST is designed around a simple request/response model. So you ask “did my account balance change” and the response comes back “no it did not“. So you check again a few minutes later, and get the same response. Sound like a silly example? Actually, we’ve learned it is a very real issue: many customers obsessively check throughout the day, as many as 60 times, inflicting a load on back-ends that they weren’t designed for.

Compounding the issue for mobile apps is that REST via HTTP wasn’t designed to work on mobile networks. HTTP on mobiles is a bit heavy, fragile and slow, and drains batteries quickly.

So notifications: isn’t that what Google and Apple Push are for? Well sure, up to a point. But there are some serious issues there. They offer no quality of service and don’t have much in the way of guaranteed messaging. The practical result customers see is that notifications arrive quickly, late, or not at all. And it is the “not at all” that is particularly troubling, because there is no way for the sending party to know whether it was delivered. I hear a lot of frustration from customers over this, with one telling me “Sure, it is great to find out there is a new level of Angry Birds but it isn’t anything I can run my business on“.

So isn’t there smartphone technology that solves this nicely? Not really. But there is an obscure machine-to-machine protocol that does. Andy Stanford-Clark and Arlen Nipper invented MQTT to solve a problem they had: how to do reliable messaging over unreliable networks. In an industrial environment with computationally “challenged” devices, with restricted power due to solar or battery power, on extremely low bandwidth and often brittle RF communications, including satellite. It had to work reliably, it had to use very few computational cycles, consume trivial power, and not hog what little bandwidth there was. And it had to be pub/sub, so as to break the cycle of violence inflicted by the heavy, unnecessary workloads and bandwidth consumption of all the request/response polling-based monitoring & control systems in place at that time.

So Arlen & Andy developed a very simple, extremely efficient publish/subscribe reliable messaging protocol and named it MQ Telemetry Transport (MQTT). A protocol that enabled devices to open a connection, keep it open using very little power, and receive events or commands with as little as 2 bytes of overhead. A protocol that has built in the things you need for reliable behavior in unreliable or intermittently connected wireless environments. Things such as “last will & testament” so all apps know immediately if a client disconnects ungracefully, “retained message” so any user re-connecting immediately gets the very latest business information, etc.
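The “2 bytes of overhead” figure is MQTT’s fixed header: one type/flags byte plus a base-128 “remaining length” field that is a single byte for packets under 128 bytes. As an illustration (a hand-rolled sketch of the MQTT v3 wire format, not a replacement for a real client like Paho), a minimal QoS 0 PUBLISH looks like this:

```python
def mqtt_publish_packet(topic, payload):
    """Hand-rolled minimal QoS 0 PUBLISH packet (MQTT v3 wire format).

    Fixed header = 1 type/flags byte + the remaining-length field,
    i.e. the famous 2 bytes of protocol overhead for small packets.
    The topic adds a 2-byte length prefix plus its UTF-8 bytes.
    """
    topic_bytes = topic.encode("utf-8")
    body = len(topic_bytes).to_bytes(2, "big") + topic_bytes + payload
    length = bytearray()
    n = len(body)
    while True:                      # encode remaining length (varint)
        digit = n % 128
        n //= 128
        length.append(digit | 0x80 if n else digit)
        if not n:
            break
    return bytes([0x30]) + bytes(length) + body  # 0x30 = PUBLISH, QoS 0

pkt = mqtt_publish_packet("a/b", b"21.5")
```

For a 4-byte payload on a short topic, the entire packet is 11 bytes; compare that with the hundreds of bytes of headers an HTTP POST would carry for the same reading.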

Fast forward a few years to Facebook’s ambition to create a communications platform for hundreds of millions of people. A platform that would have to have a dramatically better, more responsive user experience than what others were providing. Lucy Zhang, the engineer in charge, was experienced enough to know that the 3 key issues were going to be:

  • latency – how to get faster phone-to-phone communications
  • battery – and do that without killing batteries
  • bandwidth – or sucking up the user’s available bandwidth

While chatting over drinks in Barcelona, a former Facebook employee told me that the brilliant bit of lateral thinking Lucy did was, instead of trying to brute force the problem with HTTP or HTTP-based protocols like XMPP (which inherit all of HTTP’s issues on mobile), she adopted an obscure m2m protocol: MQTT.

The result: Facebook gained a communications platform with a compelling user experience (well, good design had something to do with it too) that garnered great reviews and tremendous stickiness, and today has 680M mobile users, projected to hit 1B within a year.

Verizon Wireless this spring had their engineers do an engineering assessment of the top applications on their network. Their assessment was that Facebook Messenger using MQTT is 5 stars for security, 5 stars for battery, and 4 stars for bandwidth. Which is impressive considering the bandwidth measure compares it to things like Angry Birds, which use zero bandwidth.

Stephen Nicholas did a fascinating apples-to-apples comparison of MQTT vs HTTPS on Android over 3G and WiFi, which you can read here. The 3G results are quite interesting:

  • 93x faster throughput
  • 11.89x less battery to send
  • 170.9x less battery to receive
  • 1/2 as much power to keep connection open
  • 8x less network overhead

So if you follow the NY Times or any of the IT trade magazines, you may have seen a lot of news about MQTT in the past 2 weeks. It is now in progress to become an OASIS M2M standard. And IBM has open sourced all of its MQTT client source code, including the new HTML5 MQTT over WebSocket JavaScript client, which enables MQTT in any HTML5 container: mobile browsers, desktop browsers, vehicle infotainment, consumer electronics. I’m doing 13,000 messages a second on my iPad with a pure HTML5 app developed in Worklight. I don’t think you could do that with HTTP. You can also google “MA9B” to download a zip from IBM with source code for a half dozen different mobile clients.

MQTT has a small footprint and is efficient and low power on the device. So what happens when you flip that around to the data center or mobile operator side? And drive it with new acceleration technology from IBM Labs? You get something very dense, very green, and very fast. As in: concurrently connect as many mobiles, cars, or devices as 1,000 web servers, with just 1 rack. Product Management is OK when I say 21M concurrently connected things per rack. They frown when I say the actual number. And messaging throughput? With the Beta firmware we were getting 273M mobile messages/sec per rack. The performance results we see in the GA firmware are “higher” and will be published soon. End-to-end, app-to-app on a fast network, this appliance using MQTT is in µs, not milliseconds. That’s why we’re now using this appliance and MQTT for high-speed Big Data analytics, driving millions of low-latency analytics decisions per second.

More on that, with videos, here and here.

I’m quite fanatical about this notion that response time = revenue, response time = business performance, and most importantly response time saves lives. It isn’t about machines anymore, folks; “Connected Life” is you, me and everything around us, and how it all interacts. And anytime there is a human in that interaction, latency matters very much.

Well, that’s probably enough of a diatribe for one day. I have an IBM internal blog which is “mobile musings”; my intention here was “mobile bit”, i.e. bitten, smitten, but also a small piece, short. I’ll hew closer to that in the future.

– Joe