Q&A with Boston University experts about the artificial intelligence landscape in the United States
By Hilary Katulak, Boston University
Artificial intelligence (AI) has become a defining technology of the century, and its potential is only just beginning to surface. In an effort to cement the United States’ position as the global leader in digital innovation, President Trump signed an executive order outlining a new Federal Government strategy aimed at fueling progress in AI R&D and deployment.
While many are hopeful that this mandate will open new doors and create opportunities for both academia and industry, there are also questions and concerns about how the Administration will provide the proper funding necessary to support its mission.
Two of Boston University’s leading voices in AI and data sciences, Margrit Betke, researcher and professor of computer science, and Azer Bestavros, professor and founding director of the Hariri Institute for Computing at Boston University, give their reaction to the new executive order and its potential impact.
What is your reaction to Trump’s new executive order?
Betke: The singling out of artificial intelligence by the White House is aligned with the enthusiasm and preponderance of media coverage on the promises of AI research. The executive order comes after the White House claimed a year ago that its “2019 Budget invests in fundamental AI research and computing infrastructure to maintain U.S. leadership in this field” while, in the same document, proposing to cut the budget of the National Science Foundation, the major federal sponsor of basic AI research for the academic community. Since then, a subcommittee of the U.S. House of Representatives has held three hearings on artificial intelligence. Expert witnesses from academia, industry, and federal funding agencies testified, and it is promising that the resulting bipartisan report by subcommittee leaders makes several recommendations that were taken up by the new executive order. Both the congressional report and the White House order mention funding for AI R&D at federal agencies, funding for graduate students, a directive to the National Institute of Standards and Technology (NIST) to develop technical standards, and development of data repositories that can be used to train AI systems. These are really promising ideas, and hopefully our executive and legislative branches will come to an agreement on appropriate funding to make them a reality.
Bestavros: I am particularly happy to see an emphasis on security, privacy, civil liberties, and American values in the executive order. As the leader of the free world, the United States should set the example when it comes to development of standards for how AI intersects with our society and our humanity. This is a race that China cannot win, given their focus on using AI to limit freedoms. The U.S. should strive to develop AI for a better humanity, and I am hopeful that this roadmap will be a step in the right direction.
The White House is expected to release more information about how the executive order will be carried out. What are you hoping to see included?
Betke: I hope the development of these action plans will be as open as possible and involve the academic research community and the public. Among the many details I’d like to see is how the government plans to achieve the stated goal that “The United States must foster public trust and confidence in AI technologies.” I’d like this to be turned around: We need to fund research on how we can analyze the trustworthiness of AI systems.
Bestavros: I am eager to see whether the executive order underscores the critical importance of academic research as opposed to research by Corporate America. While industry has a dominant role to play in ensuring U.S. supremacy in some fields (e.g., big pharma, nanotechnology, supercomputing, robotics), I do not believe that this will be the case for AI. Ultimately, corporate investments in AI research will be tied to companies’ bottom lines. So, while I see industry playing a big role in advancing commercially promising AI applications, I don’t see it investing in advancing AI in ways that are ethical or in applications and capacities that focus on the public good. This is where academia shines, and the government must recognize this.
How will this mandate create new opportunities for the academic research community?
Betke: I believe there will be three kinds of opportunities. First, an infusion of additional funding would dramatically support academic AI research. Second, there may be new opportunities for student fellowships for American citizens. The executive order specifically instructs the heads of federal agencies with fellowship programs to consider AI as a priority area. Only around 60 Americans have received federal fellowships annually to support their dissertation research in AI, which is relatively low. What is important to the AI research community is that the government appropriate significantly more funds, so that federal agencies can support more PhD students in AI without having to take the funds from other fields. Third, the executive order asks for prioritization of programs that support the development of “curricula that encourage the integration of AI technologies into courses in order to facilitate personalized and adaptive learning experiences.” This is very exciting, as I believe AI-enabled personalized learning will cause a significant shift in higher education.
Will this focus on AI spur innovations in particular areas of AI, industries, and/or partnerships across industry and government?
Betke: The large internet companies such as Amazon, Alphabet, and Facebook will continue to innovate in AI whether or not the directive to develop new regulatory and non-regulatory approaches to empower AI use succeeds. What’s more exciting to me are the new opportunities for AI-enabled research in the social sciences and health sciences that can come out of better accessibility of data sets. The executive order mentions both government and business data inventories. We don’t have to start from scratch: under the terms of the 2013 Federal Open Data Policy, newly generated government data must be made available in open, machine-readable formats, while continuing to ensure privacy and security.
What are the biggest privacy and ethical concerns relating to artificial intelligence that the U.S. needs to address first?
Betke: We need to strongly consider the impact of AI. What are the benefits and harms of its use? We need to understand the auditability, accuracy, explainability, and fairness of an AI system. What is the empirical basis for the system? How representative is the training set? What are the most influential items of information? What are the results of lab and field evaluations? Is the proposed application within the competence of the AI system? For example, the reliance on racially biased, closed-source prediction software in criminal sentencing and bail hearings is very worrisome.
What other factors play an important role in fostering AI innovation within the United States?
Bestavros: I would highlight the fact that supremacy in AI has been, and will continue to be, premised on ensuring that the U.S. remains the preferred destination for top-notch researchers from all over the world. No matter how much money we put into initiatives, unless we are able to attract and retain the smartest people to work on advancements in AI, we will fall behind. While the executive order talks a bit about workforce development, it is alarming that barriers are being erected, whether real or perceived, that discourage top international talent from coming to the U.S. This is an area where adjustments to our immigration laws, and more importantly to how the world perceives our country, are needed to keep the brain drain flowing in the right direction: into the U.S. rather than out of it.
This article was originally published in Futurity. For additional commentary by Boston University experts, follow us on Twitter at @BUexperts. Follow the Department of Computer Science at @BUCompSci, the College of Engineering at @BUCollegeofENG and the Rafik B. Hariri Institute for Computing and Computational Science & Engineering at @BU_Computing.