Curious about how people are using the new bike sharing systems that have been popping up in cities around the world? Hubway, Boston’s bike sharing system, has released detailed data for roughly half a million trips as part of a data visualization challenge. The challenge, sponsored by Hubway and the Metropolitan Area Planning Council, will award winners a free Hubway membership and original system maps installed at stations. Submissions are due by October 31. To submit or learn more, visit hubwaydatachallenge.org.
I just posted a new article on the Planetizen blog: “The Coming Urban Data Revolution”:
Historically, data sources for urban planning have remained relatively stable. Planners relied on a collection of well-known government-produced datasets to do their work, including statistics and geographic layers from federal, state, and local sources. Produced by regulatory processes or occasional surveys, the strengths and limitations of these sources are well known to planners and many citizens. However, all this is beginning to change. Not only has the U.S. Census Bureau’s American Community Survey introduced a bewildering variety of data products, all with margins of error, but three interrelated categories of new data are also growing rapidly: crowdsourced, private, and “big” data.
I posted an open government reading list to CoLab Radio, a blog run by the MIT Community Innovators Lab. Suggestions or comments are welcome!
Although a majority of Americans have already returned their Census forms, today is officially “Census Day,” the reference date for determining where people should be counted. How is your community doing? Take a look at the Census Bureau’s nifty participation rate map, where you can get a widget for the national participation rate or any county, city, or tract in the country.
With last year’s Gov 2.0 Summit and the explosion of social networking service GovLoop, “government 2.0” has become a buzzword in technology and government circles. What does government 2.0 refer to? And what exactly was the government 1.0 that we’re improving on? This article attempts to define the term and unearth some of the hidden assumptions and implications that result from applying concepts developed in Silicon Valley technology startups to the complex and age-old problem of governance.
The term government 2.0 is a deliberate reference to the term “web 2.0,” coined by publisher Tim O’Reilly to refer to interactive, social websites like Wikipedia and Facebook, which have revolutionized how people use the web. Before delving into the meaning of government 2.0, we should consider government 1.0, the government analogue to web 1.0. Although less common now, the term most often used for this initial approach to technology in government is e-government.
The Center for Technology in Government defined e-government as having three components: e-management, e-services, and e-democracy. The first two have been largely realized. Governments have adopted, to varying degrees of sophistication, internal information technology systems such as networks, databases, and intranets. As we will see, government 2.0 practices often rely on these underlying systems. Governments have long provided e-services to constituents through websites, email, or APIs, including tax payments, service requests, and digital applications and paperwork. The last component, e-democracy, has been more elusive. In the web 1.0 world, it has most often meant emailing elected officials or signing online petitions. These activities have grown, although in the U.S. context they exist mainly outside of government websites or structures.
At a lecture hosted by the Kennedy School Government 2.0 Professional Interest Council this fall, Nicco Mele suggested we adopt Tim O’Reilly’s web 2.0 principles as a starting point for government 2.0. My essay builds on his interesting lecture.
1. Government as Platform
O’Reilly’s first principle is “the web as platform,” adjusted for our purposes to “government as platform.” The most obvious examples of this are government agencies providing data or hosting competitions to encourage creative ideas that serve the public interest. The “apps” competitions in Washington, D.C. and New York, along with those sponsored by the Massachusetts Department of Transportation, are the start of this trend. In these competitions, government provides the data, and an ecosystem of third-party developers and tools helps unleash its value for the public, creating new tools, resources, and analyses.
Another example of government acting as platform is the phenomenon of participatory budgeting, pioneered by cities in Brazil and since spread to a number of cities around the world. This approach puts budgetary decision-making, or some part of it, directly in the hands of citizens, bypassing existing representative models of decision-making. The technical dimensions of this are only now being explored; in the Brazilian case, online deliberation and voting complemented conventional public meetings.
When it comes to service delivery, it is less clear what “government as platform” means. It may echo a broader political agenda that has sought to re-define the role of government through systematic privatization of formerly governmental functions, such as education or public services. After all, when governments provide educational or housing vouchers, aren’t they acting as an intermediary, or a platform? The political implications of shifting government from a service-provider role to a facilitating role deserve consideration. This issue is connected to a host of questions surrounding contracting and public-private partnerships. Governments may want to retain some types of service delivery if the good cannot be contracted for, or if the public wants to enforce certain service standards.
2. Harnessing Collective Intelligence
The second principle is “harnessing collective intelligence.” Obama’s Memorandum on Transparency and Open Government identified collaboration as a policy goal for the federal government. In fact, Obama’s Deputy Chief Technology Officer for Open Government, Beth Noveck, experimented with collaboration tools to create an open government policy last summer. In other areas there have been limited successes of citizen-government collaboration. The Peer to Patent program pools expert opinion to speed the patent process. The Next Stop Design project in Salt Lake City, Utah used crowdsourcing to select the design for new bus shelters. One of the people involved in the project, Daren Brabham, is writing a PhD dissertation on the application of crowdsourcing to public problems. In Melbourne, the consulting firm Collabforge ran a wiki as a component of a conventional planning process to generate the new city plan.
Fundamentally, this trend will face several types of powerful resistance.
First, it can run counter to traditional concepts of representative democracy, where elected officials work “down” through an expert bureaucracy to create and implement policy. Archon Fung has proposed that “empowered participation” can be deployed as a governance method for specific issues, such as Chicago’s school committees or neighborhood policing committees. However, creating these structures depends on modifying existing forms of governance. Existing projects have avoided this tension in several ways. The apps competitions aren’t about creating policy, and the government hosts can always disavow responsibility. Idea-generation contests usually reserve final decisions for designated juries. Policy-creation projects retain final decision-making power with conventional authorities. However, pushing further into what Beth Noveck calls “wiki government” will require addressing this tension with existing practices.
Second, a host of public problems require technical expertise to analyze or solve. The question of how to integrate technical forms of knowledge with citizens is far from resolved. The cutting edge involves putting modeling tools in the hands of citizens, who use them as “decision support tools,” but this runs counter to existing models of professional practice and the very real need for significant expertise to complete complex analyses.
Lastly, national security, a major governmental function, depends on classified data that may never be opened to the public. Interestingly, the Department of Defense has been interested in the collaborative potential of internal communication across its vast bureaucracy through wikis, for example launching a wiki to improve the Army Field Manual.
3. Open Data Standards
The third principle is the use of open data standards. Expanding access to government data is a major trend, with initiatives underway at the federal, state, and local levels to create data portals. The concept of linked data, emerging out of the semantic web community, seems poised to move into government datasets. In fact, greater linking and cross-comparison among the expanding amount of available government data will create positive pressure to ensure cross-compatibility. Within Massachusetts state government, for example, town-level data has become a standard for comparison and analysis. With the federal government already setting metadata and other standards, this shift may happen slowly, but some signs are already in place. Using data output to evaluate government can be misleading: creating data isn’t the primary purpose of government, although it is an important function. The technological viewpoint threatens to be reductionist, viewing the government as primarily engaged in collecting and hosting data. In reality, most money and effort in government is spent on delivering healthcare, education, national defense, grant programs, and regulatory actions, where data can play a supporting role (perhaps as indicators) but is not always even a mandatory input to governance.
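The payoff of a shared standard like town-level reporting is that independently published datasets can be combined without manual matching. A minimal sketch of the idea, using invented town names and figures (nothing here comes from a real dataset):

```python
# Hypothetical illustration: two town-level datasets that share a
# standard geographic key. All numbers below are made up.
population = {
    "Arlington": 42844,
    "Brookline": 58732,
    "Somerville": 75754,
}
transit_trips = {
    "Arlington": 11200,
    "Brookline": 30500,
    "Somerville": 41800,
}

# Because both datasets use the same town-level key, a derived
# indicator (trips per resident) is a simple join-and-divide.
trips_per_capita = {
    town: transit_trips[town] / population[town]
    for town in population
    if town in transit_trips
}

for town, rate in sorted(trips_per_capita.items()):
    print(f"{town}: {rate:.2f} trips per resident")
```

Without an agreed unit of comparison, each such pairing would require a one-off crosswalk between the two publishers' geographies, which is exactly the friction a standard removes.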
In Boston, the author of a recent major report studying the city’s transit agency said in November he wouldn’t ride the busy Red Line due to serious maintenance issues that threaten to cause a train derailment. At roughly the same time, data enthusiasts were demanding real-time data about bus and train arrivals at the MassDOT developers conference. When our transit systems are in real danger of catastrophic failure, shouldn’t we spend all available funds preventing disaster for existing riders, rather than inventing technology to make riding more convenient? How can these important goals be balanced properly?
4. Customer Service
The last principle discussed by Nicco is customer service, based on O’Reilly’s “rich user experience.” An emphasis on customer service is undeniable at all levels of government. Cities have launched successful 311 systems for managing citizen requests, and governments have been subscribing to the “plain language” movement to make government information more understandable and usable to citizens. However, just like “government as platform,” this principle too often reduces government to a consumer-producer relationship in which the government provides services just as private firms might in the marketplace. Customer service is important, but so is engaging with citizens to generate ideas and implement solutions. In exchange for expecting service, citizens have the responsibility to understand the resource and legal limitations of government.
5. Incremental Policy
O’Reilly has several additional principles: the end of the software release cycle, lightweight programming models, and software above the level of the single device. Of these, I think the most relevant principle for government is the advent of more iterative forms of policy making. The field of planning has developed theories of incrementalism, or “muddling through,” to reflect the real-world pace of change. The web supports both short bursts of activity and long-term archiving, and professionals are only now learning how to use these tools to develop sustained interest and engagement through ongoing conversations and communications.
What do we learn from this exercise? First, I’m not sure government 2.0 is yet a new type of government; it is instead a collection of promising trends. The adoption of new social and technical approaches to idea creation and governance doesn’t resolve age-old questions about what government should be doing, and how it should approach principles of equity and justice. In fact, what could emerge is a new, technically enabled model in the tradition of the “developmental state,” the concept that the state itself is engaged in economic and community development. This is perhaps the most important lesson of these trends: existing government processes should be examined and, where they are not working, re-invented to take advantage of the ability of technology to expand the activity of governance beyond the institutions of government.
Last week’s Open Cities conference, sponsored by the Rockefeller Foundation and Next American City, brought together a diverse group to discuss the role of new media in shaping urban policy. One of the major topics discussed was the emerging trend of cities establishing data catalogs where a wide range of datasets and feeds are made available, often with the explicit goal of enabling private apps that will use the data to create value. Washington, D.C.’s data catalog is a national leader, with San Francisco, Boston, and others not far behind. (Through sheer coincidence, New York City announced their BigApps contest during the conference.) In addition to the city-led programs, a host of other sources — from Google Transit to Data.gov — are making urban data more available than ever.
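The kind of lightweight app these catalogs enable can be sketched in a few lines: once a city publishes an extract of, say, service requests as a downloadable CSV, a third party can summarize it with no further help from the city. The rows below are invented for illustration, not drawn from any real catalog:

```python
import csv
import io
from collections import Counter

# A few made-up rows in the shape of a service-request export;
# city data catalogs publish similar files as CSV downloads or feeds.
sample_csv = """\
request_id,category,neighborhood
1001,Pothole,Dorchester
1002,Streetlight Out,Roxbury
1003,Pothole,Allston
1004,Graffiti,Dorchester
1005,Pothole,Roxbury
"""

# Count requests by category -- a trivial analysis for a developer,
# but one that becomes possible only once the raw data is published.
reader = csv.DictReader(io.StringIO(sample_csv))
counts = Counter(row["category"] for row in reader)

for category, n in counts.most_common():
    print(f"{category}: {n}")
```

The city's cost is publishing the file; the ecosystem of developers bears the cost of building whatever maps, dashboards, or apps the data supports.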
Within government, data can be a powerful tool for management and service delivery. Baltimore’s CitiStat and its emulators have shown the power of data to focus on the bottom line for easily quantified government services and policies. Applications for e-management within government are many, and today’s New York Times story on IBM’s Smarter Cities initiative describes several.
Outside of government, the case is less clear. Some at the conference questioned whether governments should expend their limited resources on finding, cleaning, and publishing data. I think this debate is largely won. The costs of hosting data have dropped precipitously, most of the datasets have already been paid for with taxpayers’ money, and the resulting apps really do seem to create new value for city residents. Less clear, however, is whether disclosing data to the public will have any impact on urban policy.
It is this deeper question that lurks in the background of conversations about data: although more and more may be available, does it influence urban policy or planning? A conference attendee who works for the mayor of a major east coast city suggested as much at one point: in his opinion the city was driven by politics, not data.
On the one hand, data seems very needed in planning. Urban planners analyze data to understand trends, and every city plan contains detailed tables, charts, and data analysis. Outside government, community development corporations and nonprofits are also frequent data users: for grant applications, advocacy, and to explore trends in urban neighborhoods. In fact, hundreds of government planners, nonprofit employees, community activists and citizens came to the conference I helped organize here in Boston last summer titled “Data Day: Using Data to Drive Community Change.”
However, the cynic will retort that there are “lies, damned lies, and statistics.” Certainly, government planners and activists need data, the argument goes, but only to support their particular agenda or policy. Taken to the extreme, this jaded view says you can find statistics to back up any belief.
This wasn’t always the case. In fact, for a brief period in the 1960s there was a great deal of interest in the possibility of establishing “social indicators” analogous to economic indicators. Just as economic indicators, such as the unemployment rate, are used to determine economic policy, social indicators would guide social policy. Judith Innes, in her 1975 book Social Indicators and Public Policy, argued social indicators could be created but must rely on a consensus understanding of definitions and measurement. The book’s fascinating history of the unemployment rate shows how the measurement has responded to cultural values about who to count. Despite thousands of books and articles on indicators in the late 60s and early 70s, the movement didn’t take off as expected. Defining social indicators was value-laden, collecting social data was expensive, and focusing on data seemed irrelevant to a turbulent, problem-filled world. It’s little wonder that when the second edition of Innes’ book appeared in 1990, it was re-titled Knowledge and Public Policy.
Although falling short of her definition of an indicator, many government datasets do provide a common framework for discussion and analysis, and perhaps even guide policy creation. Though often imperfect, their flaws are well known to all users. In the 1990s, a number of “indicators” projects emerged, organized into the National Neighborhood Indicators Partnership. Generally based in nonprofits or foundations, these projects took advantage of new technology and plentiful government data to track measures of their choosing. (At MAPC, I worked closely with the Boston affiliate, the Boston Indicators Project.)
Today, thanks to rapidly evolving technology, more urban data is available than ever. Its role is equally ambiguous: it is simultaneously in demand by diverse users for advocacy, government service delivery, and perhaps crafting urban policy. At the conference, federal officials reminded the group that the Obama administration is interested in evidence-based governance, and President Obama even elevated the former architect of the D.C. data catalog, Vivek Kundra, to the post of the nation’s first Chief Information Officer. In an interesting way, perhaps during times of concern for the public interest we are more likely to view data as a shared resource for deliberation and discussion of new policies and plans. We may be in a new era of data availability, but as always what matters isn’t the numbers themselves, but how we view them.
You can watch many of the sessions and even submit questions online.