Discussing the Political Questions of the “Smart Enough City” with Ben Green (University of Michigan)
This blog post is part of our ‘Data, Privacy, and the Future of Trust in Public Institutions’ series, which is penned by Garrett Morrow, who was MetroLab’s Experiential Research Fellow during Fall 2020. To learn more about this series and to access other posts in this series, check out our first post here.
On December 1, 2020, I had the opportunity to talk with Ben Green, Assistant Professor and Postdoctoral Scholar at the University of Michigan’s Gerald R. Ford School of Public Policy. In 2019, Ben published his book The Smart Enough City: Putting Technology in Its Place to Reclaim Our Urban Future (MIT Press), which applies a critical eye to smart city policy applications. The book served as a launching point to discuss how municipalities use smart city technologies, data, and algorithms, and how those programs affect public trust. We spoke about his previous work at the Department of Transportation in New Haven and the Department of Innovation and Technology in Boston, his PhD in applied mathematics at Harvard, and his current work on smart city and algorithmic policy.
In my own work, I have struggled to define the scope of a “smart city” in a way that gets beyond the buzzwords. I have discussed these programs as “algorithmically driven decision-making tools.” But conceptualizing these programs is a challenge, and during our conversation Ben sympathized with this question, which is why he uses the concept of the “smart enough city.” Ben’s critique of the smart city concept is that technology vendors and some people in city governments view technology as a panacea for urban problems, when really the problems are political issues that cannot be solved simply by implementing new information and communication technology infrastructure. Indeed, much of the intelligence needed to solve urban issues is likely already in the city, though it may take an unusual form.
Getting beyond the term “smart city” is difficult because of its centrality in the conversation around these programs. Despite our best efforts to redefine or move past it, we still must grapple with the term because it is how people frame these programs. By avoiding the terminology, it becomes difficult to be in conversation with the people who need to think more critically about algorithmic programs. In short, you must use the term “smart city” to engage while still being critical of it, yet using the term further entrenches the concept.
Ben has tried to push beyond “smart cities,” and his work currently focuses on machine learning algorithms and their relationship to public policy, specifically within the criminal justice system. Ben’s research and his public policy teaching at the University of Michigan ask what the design process for these algorithmic programs currently looks like. For example, how might one design an algorithm in the public interest, bridging technical questions with contextual, political ones? Algorithmic fairness is at the core of the conversation around criminal justice algorithms. Ben is interested in a more holistic view of fairness, one that often gets ignored when the conversation focuses only on the limitations of technical approaches that try to develop and “optimize” fairness. Again, echoing Ben’s critique of the smart city concept, these are not problems that can be made “fairer” by changing variables or inputs; they are political questions of how we better integrate technology into urban governance.
Criminal justice applications, like risk-assessment algorithms, are cases where the biases and injustices of applying algorithms to public policy are clearly visible, but algorithms are also being applied in other policy areas that are less visibly problematic yet still concerning. Ben sees questions of fairness and non-discrimination as central to criminal justice reform, and similar ideas apply to algorithms used in human services. For example, algorithms that predict child welfare outcomes and child abuse overlap with criminal justice. A more hidden application is fraud detection, especially around unemployment fraud. Ben believes that many of these programs have been catastrophes; they are not really answering technical questions but instead originate in austerity politics and privatization. Public health, on the other hand, is a policy area where Ben sees opportunity and reason for optimism about applying algorithms.
While algorithmically driven programs spread from policy area to policy area, there is still the question of why cities are implementing these types of programs in the first place. Ben thinks that a lot of grand smart city plans are framed around improving services: trying to deliver more and better city services with fewer resources. Ben does not see many cities trying to seize power through smart city programs, but cities are gradually asserting their authority, which may help explain recent pushback by cities against rideshare companies. For example, the city of Austin used its legal power to push back against Uber and Lyft, though this law was ultimately preempted by the Texas state legislature. Another example is cities passing surveillance oversight legislation.
Accountability and transparency, especially surrounding surveillance technologies, are currently driven externally by activism and advocacy groups. Ben thinks that people on city councils are generally on board with these measures, but police departments are a major actor in this space and are difficult to push back against. There is also a gap between what cities say and what they do. For example, New York City’s Automated Decision Systems Task Force often talked about wanting transparency while stonewalling efforts to achieve it. Advocacy and activism are ironically the main ways in which technology is spurring new collective action and democratic engagement. Cities have tried to use technology applications to improve democratic engagement with residents, but Ben has noticed that it is often technologies like surveillance or other criminal justice applications that have encouraged collective action against the technology itself.
The involvement of universities, and specifically their role in urban governance, is another question about smart city technology. Ben sees a lot of potential in universities because of their technical capacity and their ability to do things that cities simply cannot; the City of Boston is a good model for this kind of collaboration. However, there is often a mismatch of incentives between cities and universities that is difficult to resolve. For example, cities want to engage university technical experts on thoughtful governance applications, but many experts, such as computer scientists, are more interested in simply building a new technical system. There is also the issue of turnover stemming from tenure systems and class structures.
Finally, given the circumstances, my last question for Ben was about what he saw as the short- and long-term implications of COVID-19 for smart city programs. Ben answered along two lines of thought. One, Ben sees a revival of the logic that led to smart city programs in the first place: as cities face budgetary shortfalls, they will return to the austerity politics of trying to do more with less. Two, Ben is concerned about a revived interest in surveillance technology applied to COVID-19 detection and monitoring for future pandemic signals.
There is no doubt that cities will continue to develop and implement new forms of algorithmic governance. We should always be aware of the thread connecting grand smart city plans for service improvement or economic development to actual algorithmic policy development in impactful policy areas like criminal justice. Surveillance, risk-assessment algorithms, and smart cities are all part of the same conversation even if they do not seem overtly connected. As Ben described, COVID-19 will no doubt reinforce the existing logics of algorithmic program development, and those logics may be granted new legitimacy in light of changes brought by the pandemic. It is not clear what new smart city programs will look like, but improved engagement by civil society and academia may help improve the trajectory of our algorithmic future.
Garrett Morrow is a Ph.D. candidate in Political Science at Northeastern University. His dissertation looks at the politics and public trust of smart city policies, data, and algorithms. During Fall 2020, Garrett was also an Experiential Research Fellow with us here at MetroLab. His work was funded through the College of Social Sciences and Humanities at Northeastern. Garrett can be reached at morrow.g@northeastern.edu.