DTNS 3277 – Microsoft Edges Toward IoT and AI

We examine all the info-packed Monday announcements from this year’s Microsoft BUILD event. Plus, robocalls are still a thing and unfortunately they’re getting worse, and Spotify is now a publicly traded company.

Starring Tom Merritt, Sarah Lane, Roger Chang and Lamarr Wilson.

MP3


Multiple versions (ogg, video etc.) from Archive.org.

Please SUBSCRIBE HERE.

Subscribe through Apple Podcasts.

Follow us on Soundcloud.

A special thanks to all our supporters–without you, none of this would be possible.

If you are willing to support the show, you can give as little as 5 cents a day on Patreon. Thank you!

Big thanks to Dan Lueders for the headlines music and Martin Bell for the opening theme!

Big thanks to Mustafa A. from thepolarcat.com for the logo!

Thanks to Anthony Lemos of Ritual Misery for the expanded show notes!

Thanks to our mods, Kylde, Jack_Shid, KAPT_Kipper, and scottierowland on the subreddit.

Show Notes
To read the show notes in a separate page click here!

2 thoughts on “DTNS 3277 – Microsoft Edges Toward IoT and AI”

  1. About the unverified story that the prototype autonomous car that hit and killed a jaywalking pedestrian was tuned to a high threshold to avoid false-positive detection of obstacles.

    Here’s what I think is weird about that story (so maybe the story isn’t true).

    In order to have autonomous driving you need a very capable system for visual recognition.

    For comparison: some time ago Google demonstrated a trained AI, that when presented with any picture could tell whether there was a dog in the picture, and if so it could tell the breed of the dog.

    An autonomous car must have a similar capability, but far more versatile. First and foremost, the system must be able to tell – in a fraction of a second – whether it’s looking at a human (walking, cycling) or something else, and for the category ‘something else’ the system must be able to assess whether it needs to avoid a collision (and that is where tuning may come in).

    But whatever you do, you need to prioritize recognition of a human being.

    According to the story, the prototype autonomous system has a single tuning entry governing the entire decision between ‘avoid collision if possible’ and ‘treat as a false positive’.

    If true, that would mean the prototype autonomous system does not prioritize recognition of a human being. In that case the Uber prototype autonomous system would be fundamentally flawed, and a deep redesign would be needed.
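    The commenter’s point can be sketched in a few lines of code. This is a hypothetical illustration only (the function names, class labels, and threshold values are all made up, and this is not Uber’s actual system): a single global confidence threshold tuned high enough to suppress false positives will also suppress low-confidence human detections, whereas per-class thresholds can prioritize humans.

```python
# Hypothetical sketch: a single global detection threshold vs.
# per-class thresholds that prioritize human detection.
# `detections` is a list of (label, confidence) pairs from a perception system.

def should_brake_single_threshold(detections, threshold=0.9):
    """One tuning knob: any detection below `threshold` is treated as a
    false positive, regardless of what the object might be."""
    return any(conf >= threshold for _, conf in detections)

def should_brake_per_class(detections, thresholds=None):
    """Per-class thresholds: accept much lower confidence when the
    candidate object might be a human."""
    thresholds = thresholds or {"human": 0.3, "vehicle": 0.7, "other": 0.9}
    return any(conf >= thresholds.get(label, 0.9)
               for label, conf in detections)

# A low-confidence human detection (e.g. a pedestrian at night):
detections = [("human", 0.5)]
print(should_brake_single_threshold(detections))  # False: tuned out as noise
print(should_brake_per_class(detections))         # True: humans prioritized
```

    Under the single-knob design, raising the threshold to cut false positives inevitably discards uncertain human detections too; the per-class design keeps those two trade-offs separate.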

    1. It was not unverified, it was anonymous. Multiple outlets confirmed it to their satisfaction but were not at liberty to name the source.

      Also, the story did not say that collision avoidance was entirely based on that one function, but that the failure likely occurred because of the function’s sensitivity level.
