Subtle differences in definition, as well as the overlapping and loaded terminology different actors use to describe similar techniques, can have major impacts on some of the most important problems facing policymakers. Researchers typically refer to techniques that infer patterns from large sets of data as “machine learning,” yet the same concept is often labeled “AI” in policy, conjuring the specter of systems with superhuman capabilities rather than narrow and fallible algorithms. And some technologies commercially marketed as AI are so straightforward that their own engineers would describe them as “classic statistical methods.”

In attempting to better define AI for legislation or regulation, policymakers face two challenging trade-offs: whether to use a technical or human-based vocabulary, and how broad a scope to use. But despite the difficulty of these trade-offs, there is often a way for policymakers to craft an AI definition well suited to the specific application in question.

The first trade-off pits definitions based on humans against ones based on specific technical traits. Human-based definitions describe AI with analogies to human intelligence; Department of Defense strategy, for example, defines AI as “the ability of machines to perform tasks that normally require human intelligence.” By contrast, capability-based definitions describe AI through specific technical competencies. One influential definition describes a “machine-based system” that produces “predictions, recommendations, or decisions.”

Human-based definitions naturally accommodate advances in technology. Take the AI research community, which has little need for legal precision: its vague definitions of AI have attracted funding to a broad set of problems and maintained a coherent research community, even as notions of which approaches are most promising have evolved dramatically.