Design:
- Automobiles are the most powerful and dangerous tools many of us encounter on a daily basis, so you would think that in the 100+ years since the introduction of the Model T, best practices for control interfaces would be well established. A recent recall from Jeep demonstrates that reckless "innovations" sometimes make it through product development cycles, winning out over the safe but familiar. Much like the pattern of market deregulation leading to economic catastrophe, when we drift away from firsthand experience of harm, we can easily forget the vital reasoning behind the design rules of a system or product.
Automatons:
- Fine-tuning the friendliness of security robots to provoke the right amount of caution/respect/anxiety. It's an interesting little study in human psychology and aesthetics: how do you make an autonomous machine (with many human-exploitable vulnerabilities) project power but stop short of provoking fear? Still, how a robot looks is a small slice of how we relate to it. The design and engineering of non-human security actors require careful consideration to avoid a terrestrial equivalent of military drone programs, where machines hopped up on flawed databases are free to take extrajudicial actions. The more likely threat is that these rent-a-cop robots will follow in the tradition of racist profiling tactics, adding the possibility of being tagged as suspicious and dropped into the deep, invisible spaces of proprietary databases, your presumptive guilt distributed across an entire network instantly.
- The European Union Parliament recently put forward a draft report of recommendations for the Commission on Civil Law Rules on Robotics; the media response was a mess of hyperbolic takes, full of heat but little light. The actual report is fairly reasonable and gets at some essential questions for a future of truly autonomous software and machines that learn and act without human handlers. As with many legal questions, it emerges from a need to sort out liability and accountability. If a device is acting on its own, causing harm or making its own contracts, is it reasonable for liability to fall on the designer, engineer, manufacturer, or none of the above? If the liability belongs to the machine, the report argues that a new legal class ought to exist. It suggests a category of "electronic persons," which has contributed to some of the knee-jerk coverage. The phrasing demonstrates how clumsy our current language is for dealing with ideas so new and profound. Physical robots may be the most visible aspect, but the report reminds us that the layers of legal constructs, and the language we use to describe and define them, are technologies themselves: enmeshed with each great leap of invention, requiring their own innovations.
Roadmapping the Future:
- A conversation among Indigenous artists about futures, technologies, and how to sustain culture across generations of rapid technological change. As they point out in their discussion, whether it's classic sci-fi authors like H.G. Wells and Arthur C. Clarke or contemporary tech tycoons like Elon Musk, the voices that get quoted to forecast the future are often white, straight, male, and not at all reflective of the world at large.
Bias and Brains:
- Bloomberg and The New York Times both ran pieces this week about the fact that the field of AI is largely composed of white, affluent men and the negative implications that has for everyone else. The Bloomberg article speaks to some of the functional roots of bias, like the coded, gendered language in job descriptions that some studies have found discourages female applicants. The New York Times article offers examples of bias leading to ugly behaviors in shipped software that could have been easily predicted or detected if the pool of engineers, designers, or testers had been at all representative.