
Before we put $100 billion into AI …

America is poised to invest billions of dollars to remain the leader in artificial intelligence and quantum computing.

This investment is critically needed to reinvigorate the science that will shape our future. But to get the most from this investment, we must create an environment that produces innovations that are not merely technical advancements but that also benefit and uplift everyone in our society.

This is why it is important to invest in fixing the systemic inequalities that have sidelined Black people from contributing to AI and from having a hand in the products that will undoubtedly impact everyone. Black scholars, engineers, and entrepreneurs currently have little-to-no voice in AI.

A number of bills moving through the House and the Senate would invest up to $100 billion in AI and quantum computing. This legislation, such as the bill from the House Committee on Science, Space, and Technology, references the importance of ethics, fairness, and transparency, which are worthy principles but are imprecise and lack clear meaning. The bicameral Endless Frontier Act would effect transformational change in AI but is similarly unclear about how it would remedy institutional inequity in AI and address the lived experience of Black Americans. What these bills do not address is equal opportunity, which has a more precise meaning and is grounded in the movement for civil rights. These substantial investments in technology should help us realize equity and better outcomes in tech research and development. They should ensure that the people building these technologies reflect society. We are not seeing that right now.

As a Black American, I am deeply concerned about the outcomes and ill effects that this surge of funding could produce if we do not have diversity on our development teams, in our research labs, our classrooms, our boardrooms, and our executive suites.

If you look at companies building AI today — like OpenAI, Google DeepMind, Clearview, and Amazon — they are far from having diverse development teams or diverse executive teams. And we are seeing the result play out in the wrongful AI-triggered arrest of Robert Williams in January, as well as many other abuses that go under the radar.

Thus, we need to see these substantial government investments in AI tied to clear accountability for equal opportunity. If we can bring equal opportunity and technological advancement together, we will deliver the potential of AI in a way that will benefit society as a whole and live up to the ideals of America.

How do we get at the problem?

So, how do we ensure equal opportunity in tech development? It starts with how we invest in scientific research. Currently, when we make investments, we only think about technological advancement. Equal opportunity is a non-priority and, at best, a secondary consideration.

This is the entrenched system of innovation that we are used to seeing. Scientific research is the wellspring that fuels advancements in our productivity and quality of life. Science has yielded an incredible return on investment across our history and is continually transforming our lives. But we also need innovation inside our engine of innovation. It would be a mistake to assume that all scientists are enlightened enough to engage, train, mentor, cultivate, and include Black people. We should always ask: What is the bottom line that incentivizes and shapes our scientific effort?

The fix is simple really — and something we can do almost immediately: We must start enforcing existing civil rights statutes for how government funds are distributed in support of scientific advancement. This will mostly affect universities, but it will also reform other organizations that are leading the way in artificial intelligence.

Think of the government as the venture capitalist that specifically has the interest of the people as its bottom line.

If we start enforcing existing civil rights statutes, then federal funding of artificial intelligence will create a virtuous cycle. It is not just advanced technology and ideas that come out of that funding. It is also the people produced by supported research labs who are trained in how to engineer and innovate.

And research labs have an impact on science classrooms. The faculty and students engaged in research are also educating the next generation of the innovation workforce. They affect not only who is in the classroom but also who gets opportunities on the development teams that define the industry. Government funding should remind universities of their responsibility to mentor and grow future generations, not just pick winners and losers by grade policing.

If we fix how we invest in science with this massive influx of money, we can produce more enlightened innovators who will build better products — and AI that will help remedy some of the troubling things we are seeing with the technology right now. We will also be able to produce new technologies that expand our horizons beyond our current imaginations and dogma.

How do we enforce civil rights for AI R&D?

If a research lab or a university degree program is not diverse and not creating equal opportunity as required by law, then it should be ineligible for federal funding, including research grants. We should not fund researchers in computer science departments that have only yielded token representation of Black students in their graduating classes. We should not fund researchers who have received millions in public money but have never successfully mentored a Black student. Instead, we should reward researchers who achieve both inclusion of Black scholars and scientific excellence in their work. We should incentivize thoughtful and considerate mentorship by researchers, as we would want for ourselves, our own children, and our tuition dollars.

We should look at equal opportunity the same way we look at investing in the stock market. Would you invest in a stock that has not shown any growth — one that has stagnated and come to perform badly? It is unlikely anybody would put their own money into that stock unless they saw evidence that growth will occur. The same should hold true for university departments that build their prestige and economic viability primarily from money granted by the American taxpayer.

Who would be responsible for making these decisions? Ideally, it would be done by federal funding agencies themselves — the National Science Foundation, the National Institutes of Health, the Department of Defense, etc. These agencies have yielded an immense return on investment that has enabled American innovation to grow exponentially over the last century, but their view of merit needs to be rethought in the context of 2020 and the realities of our new century.

The hard part

I wrote earlier that this was an easy fix. And it is, on paper. But change will be difficult for research institutions because of their entrenched institutional culture. The people who are in positions to make the necessary change have come up through the system. And so they do not necessarily see the solution — or the problem.

I am a Professor of Computer Science and Engineering at the University of Michigan. I have worked in robotics and artificial intelligence for over 20 years. I know the feelings of elation and validation from winning large federal grants to support my research and my students. Few words can describe the sense of honor and acknowledgment that comes with federal support of one’s research. I still swell with pride every time I think about my opportunity to shake President George W. Bush’s hand in 2007 and the congratulatory note in 2016 from my congressional representative, Rep. Debbie Dingell, for my National Robotics Initiative grant.

I also understand from experience how hard it is to see things from the inside. If we make the analogy to law enforcement, it is very much like the police policing the police. We are the people producing the technological innovation and benefiting from the funding, but we are also the ones responsible for reviewing ourselves. There is little external accountability, with only “evolving” attempts at broadening participation from within.

I am neither a lawyer nor a member of the civil service, to be very clear. That said, this moment in our history is an opportune time to reimagine equal opportunity throughout the federal research portfolio. One possibility is to create an independent agency that analyzes and enforces equal opportunity across federal programs for funding scientific research, rather than dividing this responsibility among individual sub-agencies solely within the Executive Branch. Regardless of implementation, it is essential that we continually oversee the policies and practices of funding in artificial intelligence to ensure proper representation and diversity, and to ensure that federal funds are not spent without considering different viewpoints on how technology should be built and the larger systemic issues at play.

What you can do

The time to act on this is now — before the funding begins. When it comes to discrimination and racism, we must address both the hidden “disparate impact” embedded in our systems of innovation and the traditional, explicit “disparate treatment” (such as that vividly portrayed in the 2016 movie Hidden Figures).

For those who want to act, you can first look at your own organization and your own working environments and see whether you are living up to the civil rights statutes. If you are interested in translating protest into policy, write to your representatives in Congress and your elected officials and tell them equal opportunity in AI is important.

We should also ask our presidential candidates to commit to the kind of accountability I have outlined here. Regardless of who is elected, these issues of artificial intelligence and equal opportunity are going to define our country for the next few decades. This is a national priority that demands attention at the highest levels. We should all be asking who is developing this technology and what their motivations are. There is so much to be optimistic about in artificial intelligence — I would not be in this field if I did not believe that. But getting the best out of AI requires us to listen to all perspectives from all walks of life, engage with people from all zip codes across our country, embrace our global citizenship, and attract the best people from around the world.

I truly hope someday equal opportunity in AI will just be commonplace and not require such challenging discussions. It would be a lot more fun to make the case for why nonparametric belief propagation will become a better option than neural networks for more capable and explainable robot systems.

Chad Jenkins is an Associate Professor of Computer Science and Engineering and Associate Director of the Michigan Robotics Institute at the University of Michigan. He is a roboticist specializing in computer vision and human-robot interaction and leader of the Laboratory for Progress. He is a cofounder of BlackInComputing.org.
