|"I'm sorry, Dave. I'm afraid I can't do that."|
"I'm sorry, Dave. I didn't get that."
Has Siri ever interrupted you out of the blue? I found Siri's unprompted statements unnerving, so I turned off both the "Hey, Siri" and Raise to Talk settings soon after they were released. I was admittedly a bit paranoid that Apple was listening to everything I said, even when I didn't initiate the interaction. Of course, my worries were pretty minor, and nothing Siri overheard was major or life-threatening. (I wasn't dealing with HAL telling me, "I'm sorry, Dave. I'm afraid I can't do that," but Siri's similar response was creepy nonetheless.)
After spending some time reflecting on the current and future status of personal assistants (bots), I discovered the topic can quickly become a rabbit hole of further questions. According to Will Oremus in a recent Slate article, "Terrifyingly Convenient," we need to get used to conducting more of our interactions with technology through bots. He tells of an experience similar to my Siri story, when Alexa, his Amazon Echo, interrupted a private conversation he and his wife were having by interjecting a joke. Oremus points out that:
"Even as many of us continue to treat these bots as toys and novelties, they are on their way to becoming our primary gateways to all sorts of goods, services, and information, both public and personal. When that happens... Alexa and virtual agents like it will be the prisms through which we interact with the online world."
Facebook is currently testing out its own personal assistant for users, called M, who will soon interject suggestions and comments into our interactions and posts. Note that I just used the word "who" instead of "it." That's pretty telling. Oremus writes that:
"Once we perceive a virtual assistant as human, or at least humanoid, it becomes an entity with which we can establish humanlike relations... What’s most important, from the perspective of the companies behind this technology, is that we trust it."
I read with great interest as Oremus went on to explain that as these bots get better and do more for us, web search and browser use will fade, and so will mobile apps. Just imagine that for a moment. This world we have just gotten used to with the Internet at our fingertips is about to change yet again. Someday we'll tell our grandkids how we used to have to go to a search engine in a web browser to find an answer. Can you hear them ask,
"What's a search engine and browser, Grandpa?"
As I think of the implications of this for schools, I wonder how much longer we will teach web literacy for the current systems we use. Until I read about the move to bots, I hadn't considered that web browsers and apps would even be going away (I didn't have them listed in my Ed Tech Graveyard). What else will soon be there, or perhaps already is? (Cursive writing? The Dewey decimal system card catalogs? Keyboarding? Using a mouse? Hmm...)
As these bots are programmed, they need to have a source for their information. Amazon's Alexa defaults to NPR for news and Wikipedia for information. There is a way to change this, but it's not clearly stated, and most users will likely just accept the defaults. Oremus states that:
"The problem is that conversational interfaces don’t lend themselves to the sort of open flow of information we’ve become accustomed to in the Google era. By necessity they limit our choices—because their function is to make choices on our behalf."
Bot Literacy will be the new Web Literacy.
So how much will you blindly trust the programming and the company controlling your bots? I believe that students should understand the background behind the information their bots provide to them: how it is selected, and what is left out. The fundamentals of web literacy will still apply, but perhaps it will be called bot literacy in the not-so-distant future. My favorite resources for teaching web literacy are from Alan November, and these can apply to bots in the future. We will still need to understand and pay attention to the issues Oremus points out, like "transparency, privacy, objectivity, and trust," as we navigate through a high-tech world interacting with bots. The full article on Slate is a great read. For even more information about this, check out The Search for the Killer Bot from The Verge.
A last thought for you to contemplate as you consider the role of bot literacy in the future: at a recent conference, a speaker casually mentioned a conundrum that the programmers of Google's driverless cars face. If the bot driving your car knows an accident is imminent and has to choose between saving your single life and saving the lives of several passengers in the oncoming vehicle, how should the program be written? Yikes! It sounds like a good problem for students to discuss in Bot Ethics 101 classes.