The future of digital assistants is queer


Queering the smart wife, in its simplest form, can mean giving digital assistants different personalities that more accurately represent the many versions of femininity that exist around the world, as opposed to the pleasant and submissive persona that many companies have chosen to adopt.

Strengers adds that Q is a fair example of what these devices could look like, “but that can’t be the only solution.” Another option could be to bring in masculinity in different ways. One example might be Pepper, a humanoid robot developed by SoftBank Robotics that is often given masculine pronouns and is capable of recognizing faces and basic human emotions. Or Jibo, another robot, introduced in 2017, that also used masculine pronouns and was marketed as a social robot for the home, although it has since been given a second life as a device focused on healthcare and education. Given the “gentle and androgynous” masculinity performed by Pepper and Jibo – for example, the former answering questions politely and often coming across as flirtatious, and the latter frequently spinning around and approaching users with an endearing demeanor – Strengers and Kennedy see them as positive steps in the right direction.

Queering digital assistants can also mean creating bot personalities that replace humanized conceptions of technology altogether. When asked about its gender, Eno, Capital One’s banking bot launched in 2019, replies comically: “I’m binary. I don’t mean I’m both, I mean I’m actually just ones and zeros. Consider me a bot.”

Likewise, Kai, an online banking chatbot developed by Kasisto – a company that builds artificial-intelligence software for online banking – abandons human characteristics entirely. Jacqueline Feldman, the Massachusetts-based writer and user-experience designer who created Kai, explained that the bot was “designed to be genderless”: not by assuming a nonbinary identity, as Q does, but by assuming a robot-specific identity and using “it” pronouns. “From my perspective as a designer, a bot can be beautifully and charmingly designed in new ways of its own, without pretending to be human,” she says.

When Kai is asked whether it is a real person, it replies, “A bot is a bot is a bot. Next question, please,” clearly signaling to users that it is not human and does not pretend to be. And if asked about its gender, it answers, “As a bot, I’m not a human. But I am learning. That’s machine learning.”

A bot identity does not mean that Kai puts up with abuse. A few years ago, Feldman spoke about deliberately building Kai with the ability to deflect and shut down harassment. For example, if a user repeatedly harasses the bot, Kai responds with something like “I’m picturing white sand and a hammock, please try me later!” “I really did my best to give the bot some dignity,” Feldman told the Australian Broadcasting Corporation in 2017.
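The article describes this deflection only at the level of behavior. As a rough illustration of how such a rule might work, here is a minimal sketch in Python: the keyword list, threshold, and function names are all invented for the example, and this is not Kasisto’s actual implementation.

```python
# Hypothetical sketch of the rule-based deflection described above: the bot
# counts abusive messages within a session and, on repetition, answers with a
# redirecting line instead of engaging. All names and thresholds are invented
# for illustration.

ABUSE_KEYWORDS = {"stupid", "shut up", "hate you"}  # toy placeholder list
DEFLECTION = "I'm picturing white sand and a hammock, please try me later!"

def respond(message: str, session: dict) -> str:
    """Reply to a message, deflecting when harassment repeats in a session."""
    if any(phrase in message.lower() for phrase in ABUSE_KEYWORDS):
        session["abuse_count"] = session.get("abuse_count", 0) + 1
        if session["abuse_count"] >= 2:  # repeated harassment: disengage
            return DEFLECTION
        return "Let's keep things friendly. How can I help with your banking?"
    return "Happy to help. What would you like to do?"

# Example session
session = {}
print(respond("you're stupid", session))  # first strike: gentle redirect
print(respond("shut up", session))        # repeated: deflection line
```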

Feldman also believes there is an ethical duty for bots to identify themselves as bots. “There’s a lack of transparency when companies design [bots] in ways that make it easy for a person interacting with the bot to forget that it’s a bot,” she says, and gendering bots or giving them a human voice makes that even harder. Because many consumer experiences with chatbots are frustrating, and many people would rather talk to a person, Feldman thinks that conferring human qualities on bots could be a case of “overdesigning.”


