Good lord, I wasn't recommending *using* it. Google Now seems to have been, like most of their features, programmed by people who have no idea that boundaries exist for other people, and who think that if they just give it enough data, they can create an AI that will be the perfect mother they never had. Or something. GNow scrapes data from any and all sources, and is effectively a Trojan to acquire more data about the wallet attached to that data. I'm guessing the reason they're not pushing it now is that they no longer need the user to connect the multiple streams for them; they can do it automatically.
(This is also a reason not to use Waze, which Google bought.)
I use a similar standard to yours: assistants should be like the perfect English butler, anticipating needs without being intrusive, and having a double-sized helping of tact. (Ironically, such butlers usually aren't found in fiction, because the pushy sort--Jeeves, Alfred, Jarvis--make much better characters.)
There is a lot of research in the grey space of shared intent--"mixed-initiative" systems, where volition is shared more evenly between human and computer--but most of my knowledge of it is a decade out of date. Inferring intent is extremely hard, but well-studied; inferring boundaries is also extremely hard, and doesn't seem to get much research at all.
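For a flavor of what intent inference looks like in that literature, here's a toy Bayesian sketch (everything in it--the intents, the likelihoods, the threshold--is made up for illustration): the assistant keeps a posterior over guessed intents and only speaks up when one guess clears a confidence bar.

```python
# Toy Bayesian intent inference, the core move in much mixed-initiative
# work: maintain a posterior over hypothesized user intents and act only
# when confidence clears a threshold. All intents, likelihoods, and the
# threshold here are invented for illustration.

# Hypothetical intents and a uniform prior over them.
INTENTS = ["navigate_home", "find_lunch", "ignore_me"]
prior = {intent: 1.0 / len(INTENTS) for intent in INTENTS}

# Hypothetical likelihoods: P(observed action | intent).
LIKELIHOOD = {
    ("opened_maps", "navigate_home"): 0.7,
    ("opened_maps", "find_lunch"): 0.5,
    ("opened_maps", "ignore_me"): 0.1,
    ("searched_restaurants", "navigate_home"): 0.1,
    ("searched_restaurants", "find_lunch"): 0.8,
    ("searched_restaurants", "ignore_me"): 0.1,
}

def update(posterior, action):
    """One Bayes step: reweight each intent by how well it explains the action."""
    unnormalized = {
        intent: p * LIKELIHOOD.get((action, intent), 0.05)
        for intent, p in posterior.items()
    }
    total = sum(unnormalized.values())
    return {intent: p / total for intent, p in unnormalized.items()}

def maybe_offer_help(posterior, threshold=0.8):
    """The 'butler' rule: stay quiet unless one intent is very likely."""
    best_intent, best_p = max(posterior.items(), key=lambda kv: kv[1])
    if best_p >= threshold and best_intent != "ignore_me":
        return f"offer: {best_intent}"
    return None  # not confident enough; don't intrude

posterior = prior
for action in ["opened_maps", "searched_restaurants"]:
    posterior = update(posterior, action)
    print(action, posterior, maybe_offer_help(posterior))
```

Note that the threshold is doing all of the "tact" work here, and it's a single crude number; that gap is roughly where the missing boundary-inference research would have to go.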
Coincidentally, an AI researcher asked yesterday for paper recommendations on the intersection of AI and ethics, but I can't seem to find the link...