Posted: May 31st, 2013 | Author: Rob Goodspeed | Filed under: Government, Technology | 1 Comment »
On the Radical Anti-Institutionalism of Internet Intellectuals
I recently attended a talk by Ethan Zuckerman, the director of the MIT Center for Civic Media, advertised as addressing the question “is digital media changing what it means to be an engaged citizen?” As a blogger, founder of three hyperlocal news websites, and now a student of technology in (government-led) urban planning, I was interested in what he would have to say. The talk, which lasted at least 45 minutes, was ostensibly on this topic, but I found it deeply troubling. Zuckerman, officially a “research scientist” at MIT, talked about government, media, technology, and society with little mention of any previous research. I waited, in vain, to hear even the barest mention of previous thinkers on these topics (for example: the Federalist Papers, de Tocqueville, Dewey, Lippmann, Schumpeter, Arnstein, Pateman, Castells, Fung, work in communications and political science journals, etc.). (It appears what I heard was very similar to his keynote at a recent conference; see notes and a video of the talk.)
I have no doubt he is familiar with some of these thinkers, and probably has something to say about them. But they were largely omitted from the talk, aside from a perfunctory mention of several recent popular books. This presents a puzzle: why? Is it possible to conduct research on a topic while ignoring previous research? If so, why would you want to?
In hindsight, I should have not been surprised. The talk was an example of a broader milieu of “Internet intellectuals” who eschew previous research and thinking, or treat it in fragmentary or limited ways. I am sympathetic to the arguments put forward by Evgeny Morozov, who has gleefully attacked the most notoriously vapid talking heads (notably Tim O’Reilly and Jeff Jarvis), but I lack his indignation. The standards for public debate have always been fast and loose, and it’s probably good that way. In the long sweep of history, the junk is forgotten and gems of insight are remembered. However, academia is different. There is a norm of at least acknowledging other points of view, and attempting to take them seriously. Certainly, deep cleavages remain between fields, but they are for the most part about differing basic assumptions about reality or how to conduct research, not a willful ignorance of the alternatives. Good scholars read broadly and continually question their own basic assumptions.
In light of this, I concluded Zuckerman was choosing to omit previous research from this talk because he believes, at some level, that it is not required for the discussion. Perhaps the Internet has rendered everything totally new. We need new ways of thinking, and therefore the old ideas are not only a hindrance but must be deliberately exorcised. There is something to this point of view. Existing theories and worldviews are powerful blinders, and good empirical research often involves looking closely at reality before considering how to interpret it. This is required to make new insights, whether tweaks to earlier theories or totally novel ideas. However, research cannot be done in a total vacuum. All new knowledge is related, in some way, to previous thinking. Even boldly innovative work should acknowledge the flawed models it hopes to supersede.
Working on my dissertation, at the urging of my dissertation committee member Annette Kim, I explored some of the literature on institutions in society. Scholars in multiple fields began to realize in the late 1980s that institutions play a key role in understanding the economy, government, and other social phenomena. Economists focused on analyzing why institutions might be efficient for “rational actors,” but sociologists pointed out that institutions consist not only of formal rules and structures, but also of broader cultural norms and ways of thinking. For example, companies follow many standard practices not because they objectively know they are efficient, but because they are influenced by broader cultural assumptions about how things should be done. Sociologists also argue that organizations and companies have missions broader than simply making a profit. In some deep way, institutions actually organize our behavior and create structures of meaning.
I realized the Internet-centric worldview is radically oriented towards deinstitutionalization. In short, since institutions are “socially constructed” (meaning they exist partly because we believe in them), if you get enough people to believe they don’t exist, they eventually won’t. Now, this may be useful if the institution you are attacking has no value or is harmful, but I’m not sure this is always the case. I don’t fantasize about the possibility of some type of benevolent anarchist utopia, not only because my political opinions differ, but also because I think institutions perform valuable functions for us.
An example will help illustrate this. In a classic study of an impoverished town in Southern Italy in the 1950s, political scientist Edward Banfield (whose later work I would criticize) tried to figure out why it had a dysfunctional government and an anemic economy. He concluded it was because of a local culture that made it difficult, if not impossible, to create institutions of any type. In short, institutions are important because they do stuff. Without them, modern life would be impossible.
Luckily, for the most part the deinstitutional thinking of “Internet intellectuals” has run up against the hard brick wall of reality. A key example is copyright law, which, despite the valiant efforts of activists, remains as deeply entrenched as ever, although there have been great strides in voluntary copyright liberalization. Activists like Aaron Swartz are subjected to unrelenting state power that is, in my view, predictable but not appropriate in its harshness. Moral arguments aside, good activism must be able to diagnose the nature of the enemy. In this case, copyright laws are held up by powerful organized interests, such as corporations, that are deeply invested in their maintenance. I don’t like or condone what is happening, but I understand it.
Another example of this tendency is in online education. To a certain degree, universities define what higher education should be. Students select a major or field of study with a set curriculum, then pick specific topics and use accepted research methods to build on or challenge existing knowledge. The idea of a liberal arts education is that education isn’t merely for career purposes, but to broaden and deepen the human experience, attuning students to arts, ethics, and other dimensions of culture.
Needless to say, this is in sharp contrast to many popular online education models. Curricula are largely nonexistent, and the focus is on chopping complex ideas into discrete snippets presented through short videos or exercises. The more nuanced practitioners are aware of these dimensions, but the loudest proponents of new online education models, typically from engineering or computer science, are throwing out the baby (curriculum, the liberal arts, in-person seminars, the social experience of attending college, etc.) with the bathwater, because they never valued the baby very much to begin with. For them, education is a set of discrete, narrow skills that are valued in industry. Full stop. The rest is for your spare time.
I am not a Luddite opposed to technological innovation. However, institutional reform should be done carefully, not unthinkingly. We accept “disruption” in the private sector, but always under the watchful eye of regulation. And private businesses merely (for the most part) make stuff. Government and educational institutions play a much more central role in our society, and we may be blind to their deep and nuanced functions. They are not all good or bad, and certainly should not be above thoughtful reform. In fact, I wrote previously that technology-centered efforts to reform city government are significant specifically because they are at least interested in the important issue of government reform.
I don’t think most of this is very original or profound, and the naive anarchism of Internet-centric people is legendary. However, what is surprising to me is the extent to which it has penetrated MIT, an elite educational institution. In fact, the entire MIT Media Lab, by design independent of formal disciplines, has been engaging more and more in social questions with limited links to existing scholarship and theory in the areas it is moving into.
Oddball centers exist throughout academia if you look hard enough, and maybe some of them are actually onto something the rest of us are missing. The point, though, is that universities share a spirit of inquiry and examination. All knowledge should be provisional and open to scrutiny. Therefore, consider this a direct address to the MIT Center for Civic Media. What have I got wrong? How do you engage with previous research, and if you ignore it, why? Not everyone shares your assumption that everything is totally new. I am, perhaps, a too-rare academic willing to entertain the idea that some things may be, and that new models are needed. Let’s meet in the middle and see where it leads.
Posted: September 9th, 2011 | Author: Rob Goodspeed | Filed under: Government, Technology, Urban Development | Tags: cybernetics, smarter cities | 1 Comment »
Periodically I come across an old article that seems very relevant to the present, such as the article about public sector innovation I posted in January.
The expanding use — and declining cost — of sensors and computing technologies has sparked renewed interest in using them to solve persistent urban problems. A similar wave of interest occurred during the early history of digital computing. In his influential 1950 book, Norbert Wiener popularized the term “cybernetics” to refer to the emerging science of communication and control in organized systems. If the city is an organized system, then cybernetics in city hall would involve creating information feedback loops to be used by the manager (or “actuator”) to minimize the effects of disturbances and maximize achievement of urban goals. Sound familiar? It should: IBM inked a multimillion-dollar deal to open a real-time “public information management center” in Rio de Janeiro as part of its smarter cities initiative, and Wired magazine is keeping up a drumbeat about the power of feedback loops.
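The cybernetic loop can be made concrete with a toy simulation. Everything in the sketch below is invented for illustration — the “pothole backlog” indicator, the repair-crew actuator, and the gain value are assumptions, not anything from Wiener or IBM — but it shows the basic sense-compare-act cycle:

```python
# Toy sketch of a cybernetic control loop applied to a city service.
# The indicator (pothole backlog), actuator (repair crews), and gain
# are all hypothetical values chosen for illustration.

def run_loop(goal, backlog, gain, disturbances):
    """Each period: sense the backlog, compare it to the goal, and
    dispatch repairs proportional to the error (the actuator), while
    new potholes (the disturbance) keep appearing."""
    history = []
    for new_potholes in disturbances:
        error = backlog - goal                # feedback: sensed state vs. goal
        repairs = max(0.0, gain * error)      # manager's corrective action
        backlog = max(0.0, backlog - repairs + new_potholes)
        history.append(backlog)
    return history

# Start far from the goal, with a steady stream of new potholes.
history = run_loop(goal=100, backlog=500, gain=0.5, disturbances=[40] * 20)
```

Even in this toy version, the controller only works as well as its chosen goal and its sensed indicator, which is exactly where the real-world complications arise.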
In an astute article published in Science in 1970, E.S. Savas considered the challenges this approach might face in the real world of New York City government. I don’t doubt the importance of real-time control for management tasks like transportation system management and emergency response, but the article describes some important challenges such a system would face if applied more broadly. Savas described how the five elements of the cybernetic loop would play out in the city: (1) dynamics of urban government, (2) information system, (3) administration, (4) goal setting, and (5) disturbances.
1. Dynamics of city government: The election cycle faced by big city mayors would limit the range of solutions considered, resulting in smaller goals and visible acts, which “may be more symbolic than effective.” Government itself is very slow-moving and one solution — delegating power — may have unintended consequences.
2. Information system: Arguably, much more information is available today than in 1970 about what’s happening in the city. But another crucial input is as tricky as ever: gauging the will of the people.
3. Administration: Making a decision is one thing, but implementing it requires an administration with appropriate personnel and structure, a well-known weakness of big-city bureaucracies.
A further point about the structure of government, which the article omits: not only are city governments organized in anachronistic ways, power is also fragmented. In Boston, for example, in addition to municipal fragmentation itself, separate entities manage many utilities, the transit system, parks, and more.
4. Goal setting: Identifying a common set of goals may be impossible. The chief executive can use judgement, but it is for good reason that power is delegated to elaborate systems of commissions, boards, and advisers on many topics.
5. Disturbances: These are unpredictable, often external to the city, and often not visible to the public (who sets the goals) until it is too late to prevent their impact. (e.g., climate change)
There are, in general, two responses to most of these concerns. Savas himself took one approach: give up on city government and advocate for privatization of service delivery. Presumably the cold logic of the profit motive would sweep away the administrative, regulatory, and decision-making quirks of city governments. The other approach is to attempt to reform the government. In fact, IBM staff have acknowledged the “challenges” that will face a contemporary agenda for cybernetics. I think the need for urban government reorganization and reform is acute in many cities, but interest in it seems limited.
Notably, neither of these approaches truly addresses the challenges posed by the short time-horizon of elected officials, difficulty setting goals or forming consensus, and unpredictable disturbances. These three point to the need for planning to solve urban problems: a multi-stakeholder process involving analysis, deliberation, and solution design that both forges a consensus about the definition of a public problem and crafts a desired solution. It seems to me that in the face of the enormity of the challenges we face we need both smart planning and an efficiency-driven smart cities movement willing to push for reform but respectful of democratic systems.
> E.S. Savas in Science magazine, 1970: “Cybernetics in City Hall”
Posted: October 17th, 2010 | Author: Rob Goodspeed | Filed under: Government, Technology | Tags: crowdsourcing, Gov 2.0 | Comments Off
Lately I’ve been involved in a lot of conversations about crowdsourcing in the public sector. Although they’re sometimes confused, in general I think there are two types we can talk about: crowdsourcing policy (or ideas) and public goods (tangible work or services). This is a topic included in my Open Government Strategy for the City of Boston.
The best analysis of private-sector crowdsourcing of ideas is this recent article in the Sloan Management Review. The researchers analyze four crowdsourcing projects: Linux, Wikipedia, Innocentive, and Threadless. By breaking down the organization of each case, they make clear these projects are not a utopian creative free-for-all, but instead carefully constructed sets of rules and practices that combine forms of decision-making, creativity, and incentives in new ways to create new ideas. For example, Wikipedia relies on the decisions of editors for disputed articles, Threadless users vote for the best ideas, and on Innocentive businesses pick winners. In each case the rewards to contributors differ, but they exist even when non-monetary, often taking the form of “love” or “glory.”
Two examples of policy crowdsourcing are Peer to Patent and Next Stop Design. Peer to Patent opens patent applications, with the permission of the applicant, to a pilot system that allows the public to contribute to the research on “prior art.” The idea is that by allowing experts to contribute to this process, the Patent Office can more quickly determine which ideas deserve patents. The project was founded by Beth Simone Noveck, a professor at New York Law School who leads the Obama administration’s open government initiative. The project is successful because it enables topic experts to conveniently contribute information that expedites the official process. However, it remains a voluntary pilot project and has not been taken to scale.
The Next Stop Design project, launched by researcher Daren Brabham, solicited designs for a Salt Lake City bus stop from around the world. Daren, now a professor at UNC Chapel Hill, wrote a PhD dissertation about public-sector crowdsourcing. If you can access it, he lays out his approach in a recent article in Planning Theory. He argues that crowdsourcing can replace conventional approaches to citizen participation:
> In essence, any urban planning project is predicated on a problem. Typically that problem is how best to accommodate changing populations with different infrastructure, all while considering the interests of residents, developers, business owners, and the environment. If a problem can be framed clearly, and if all the data pertaining to a problem can be made available, then that problem can be crowdsourced.
Since I’d argue most planning projects involve multiple, contested problems, I’m not sure crowdsourcing can replace a host of existing theory and approaches. However, where the problem contains a significant design element, and the boundaries are noncontroversial (such as a bus stop), it may be an excellent strategy.
Finally, what about crowdsourcing public goods themselves? In the words of Tim O’Reilly, can government be a “platform for greatness”? Last month I argued such thinking was blind to the realities of government: power is divided between agencies, it’s run by politicians, and most people may not agree this is the way to go to begin with. The problems seem more surmountable at the local level. Mitch Weiss, chief of staff to Boston’s mayor, raised the issue in a provocative talk at the Rappaport Institute titled “How ‘Peer-Produced’ Government Can Help Fill Potholes, Save Cities, and Maybe Even Rescue Democracy.” I worked with him last summer, and I think the city’s initiatives to release data and improve citizens’ ability to communicate with government have been very positive. However, I’m not sure the city will ever be coordinating peer-produced services.
Even if we can overcome the formidable institutional and political barriers, there are good reasons why governments may never be directly involved in facilitating the peer production or crowdsourcing of public goods. I encountered a good explanation of why this summer at the iGov Research Institute. Bas Kotterink, a researcher with the Netherlands research organization TNO, proposed the following hypothesis in a presentation:
> Governments are not geared for co-creation. Instead, they should facilitate and monitor user and company-led innovation of public tasks with a more proactive role in democracy (inclusion) and enforcement, protecting basic human values such as privacy and dignity.
He argued that since the rules of private and government action are so different, initiatives at either extreme are the most able to produce public goods. Mixtures of the two – such as some e-participation initiatives – are doomed to fail. However, he stakes out an important role for government. Governments can ensure minimum standards for key services by punishing offenders and enforcing regulations, or by providing services themselves when market failures occur. They can promote data standards and access to public data (such as through apps competitions). They can define and protect standards of individual privacy. Although they may not directly produce certain public goods as in the past, governments will continue to play a critical role we are only beginning to understand.
Posted: September 13th, 2010 | Author: Rob Goodspeed | Filed under: eGovernment, Government, Technology | Tags: Gov 2.0 | 2 Comments »
One of the most visible supporters of technical innovation in government recently has been Tim O’Reilly. Perhaps best known for popularizing the term “Web 2.0,” O’Reilly runs a media company that publishes popular software manuals and organizes industry-leading conferences for Internet entrepreneurs. In the past few years, he’s increasingly turned his attention to applying innovative Internet technology to government, organizing in 2009 the inaugural Gov 2.0 Summit and Expo in Washington, D.C., events that bring together high-ranking government officials and technology gurus.
O’Reilly’s agenda includes nothing less than the complete transformation of government. The Internet has unleashed tremendous creativity through Web 2.0 websites, he reasons, so why can’t similar results be organized for government? The argument is presented as a chapter in an edited volume published by O’Reilly Media last May, titled Open Government. The chapter, titled “Government as a Platform,” is also available online, and summarizes the argument he has made in many blog posts and lectures:
> Web 2.0 was not a new version of the World Wide Web; it was a renaissance after the dark ages of the dotcom bust, a rediscovery of the power hidden in the original design of the World Wide Web. Similarly, Government 2.0 is not a new kind of government; it is government stripped down to its core, rediscovered and reimagined as if for the first time.
>
> And in that reimagining, this is the idea that becomes clear: government is, at bottom, a mechanism for collective action. We band together, make laws, pay taxes, and build the institutions of government to manage problems that are too large for us individually and whose solution is in our common interest.
>
> Government 2.0, then, is the use of technology—especially the collaborative technologies at the heart of Web 2.0—to better solve collective problems at a city, state, national, and international level.
For too long, government has been nothing more than a vending machine, O’Reilly argues, dispensing services to citizens in exchange for taxes. When we didn’t like what it produced, we resorted to shaking the machine — political protest. What we should be doing, O’Reilly argues, is creating a government that enables collective action and captures the energy and innovation of the marketplace. In short, government should be a “platform for greatness,” coordinating and empowering individuals to serve the public interest.
The concept has caught on in some circles, embraced by groups like New York City’s Open Planning Project, a nonprofit dedicated to open-source mapping software, open data, and democratizing the planning process, which included O’Reilly in a recent film about the value of publishing transit data. O’Reilly showed this film during his opening remarks at this year’s Gov 2.0 Summit, which concluded earlier this month. Those remarks, however, struck a sober tone. After the enthusiasm of the first event, achieving Government 2.0 is “harder than it appears,” he conceded. Still, O’Reilly said he believes “Gov 2.0 answers the debate we’ve been having whether government is too big or too small … and creates the possibility of doing less and getting more.”
Indeed, technical innovations are slowly filtering into government. With open standards and the growth of sophisticated free and open-source technology, more and more proprietary and difficult-to-use vendor products are finally feeling healthy competition. Government data has the potential for improving journalism, access to services, and the evaluation of policy.
However, if we are to follow the metaphor to its logical conclusion, to truly reinvent government along different lines, what obstacles might we face? How might the lessons of Wikipedia, Facebook, and YouTube be applied to the ancient art of government? Unpacking these questions may help explain why the path of government reform is a difficult one.
1. What’s a “Platform” Anyway?
In his thoughtful recent article “The Politics of ‘Platforms’,” Tarleton Gillespie argues that web companies use the word platform in a variety of ways. To regulators, they’re merely neutral platforms, not responsible for the views expressed by participants and exempt from regulation. To users, they’re privileged platforms subject to detailed terms of service and censorship of offensive content. To other media companies, they’re lucrative platforms for profit. He concludes, “in other words, [these examples] represent an attempt to establish the very criteria by which these technologies will be judged, built directly into the terms by which we know them.”
In addition, all of these private “platforms” have some kind of internal governance that sets the rules for participants. Whether groups of editors on Wikipedia or a corporate board, none institutionalizes a form of governance anywhere near the complexity of real government. In fact, most are essentially benevolent dictatorships, with CEOs held accountable by market forces. And if they can establish a monopoly, they’re restrained only by goodwill (such as “don’t be evil”) and any applicable laws. This issue brings us to the second obstacle.
2. The Federalist System
From outside of government, it’s easy to assume government has the power to do whatever it wishes so long as the elected officials agree and can obtain sufficient funds. Not true. Public power is carefully and deliberately divided among a bewildering array of states, agencies, municipalities, districts, and quasi-public entities. For example, some of the best ways to curb harmful externalities (like carbon dioxide emissions) are taxes, yet in Massachusetts, cities cannot create new taxes without the approval of the state legislature. Period. The federal government often seeks to reform education. The only problem? Schools are run by local school boards. Federal education policy has a variety of carrots and sticks at its disposal, but only local school districts control the day-to-day operation of schooling or can implement radically innovative new programs. This brings us to the next obstacle: who’s in charge.
3. Who’s in Charge?
It’s easy to think federal agencies are out there advancing, say, transportation or health and human services in a general way. To the contrary, they operate under specific legislative guidelines. At the local level, although more policy entrepreneurship is possible, it always occurs under the watchful eye of lawyers and is generally subject to legislative intervention. In fact, in theory elected or appointed officials run the whole operation of government. Implementing Government 2.0 therefore must involve the hard work of crafting detailed proposals, lobbying, and promotion used by any interest group. Which leads to the most incorrigible force of resistance of all.
4. We the People
O’Reilly is confident his vision of Government 2.0 transcends ideology. I’m not so sure. Any proposal for how government should operate is inherently ideological, and his is no different. It includes a celebration of the market, belief in the power of individual creativity, and a desire to get government out of the business of providing direct services. In these ways it closely resembles neoliberalism, although perhaps with the assumption that collective action outside of the market is necessary. This ideology may seem appealing to technologists, but a host of Americans may think otherwise. Leftists may prefer the old vending machine (where we can ensure the quality of public services), and conservatives may want to continue to shrink government. Even if it costs less, they might argue, we shouldn’t be tackling some problems through government at all.
The goal of this post is not to deflate the momentum of Government 2.0 advocates, but temper their enthusiasm with some realism. Publishing open data about transit service does seem somehow new for government. Yet we should never lose sight of what’s happening: a marginal increase in convenience for citizens, and some modest profits for software developers. Organizing a distributed, crowdsourced alternative to the subway? If it were even possible, this would require the cooperation of multiple government agencies, breaking union contracts, re-writing state law, and convincing everyday citizens an alternative to the existing one-agency system is desirable.
For these reasons, achieving public benefits through technology is often easier to organize completely outside of government. For example, a grassroots movement to clean up Estonia in one day was very successful, but nearly impossible to imagine under the guidance of a government agency. (What about liability? What about union rules? Did the legislature authorize everything properly?)
In Boston, we’ve anxiously awaited real-time arrival data for buses and trains. However, I’m not sure how that relates to the health of the underlying service, which carries billions of dollars of debt and backlogged maintenance. Until we can figure out how to use technology to tackle those problems (crowdsourced railcar maintenance? DIY track inspections?), Government 2.0 will remain a buzzword and not a true reform movement.
Posted: June 14th, 2010 | Author: Rob Goodspeed | Filed under: Government, Transit | Tags: Fees, Local Government, Public Finance, Taxes | 2 Comments »
Most American local governments and transit agencies are struggling to balance their finances. In addition to the economic downturn, revenues are often limited by property tax limits or political resistance to tax or fee increases. Meanwhile, costs have been ballooning. Many fixed costs, such as the price of energy, health care, and other employee benefits have expanded dramatically in recent years. The result has been a quiet crisis, which is sowing the seeds for future problems. Here in Boston, last year the MBTA only had enough funds to fix six of the agency’s 56 infrastructure projects that were ranked as most important for public safety on an internal 10-point rating scale. At the MBTA and other agencies, important upgrades and maintenance are being delayed. Similar patterns are playing out in cities and towns making hard decisions about upgrades to critical road, water, and other infrastructure.
However, the financial distress is also provoking government officials to create new forms of revenue generation. The most notable example relates to streetlights. Although streetlights are desired by citizens for safety, the increasing cost of electricity has pushed several cities to deactivate them. In the face of citizen complaints, several now allow local residents to pay directly for the cost of their operation. Presumably the capital costs are “sunk,” so the marginal cost of operation is electricity plus a small maintenance fee. Concord, Massachusetts is deactivating many, and providing residents the option of sponsoring a streetlight of their choosing at a cost of $17 per light per month, or $204 a year. Colorado Springs, Colorado is making deep cuts to a range of public services, including deactivating over 30% of the city’s streetlights. As in Concord, citizens can adopt local streetlights at a cost of $100 to $240 per year, depending on size.
When I proposed this as a model for raising public funds to a group of graduate students recently, they immediately raised concerns. On the surface, the transaction does look like an example where government provides services for a fee instead of safeguarding the public interest. Will poor neighborhoods lose out? However, on closer examination I think the arrangement is more nuanced. Although the streetlight adopter provides all the funding, they by no means enjoy all the benefits. The nexus between who pays and who benefits is indirect at best. In both of the cases above, governments decided which streetlights should remain on for safety purposes (at intersections, for example). Officials in Concord admitted to adding streetlights where they may not be needed during better times. In addition, in already relatively homogeneous communities like Concord (where only 2% of families were below the poverty line in 2000), the equity effects are muted.
Policymakers could also affirmatively address such concerns, while allowing residents the freedom to fund their local streetlight. In this arrangement, the price charged would be slightly above the true cost (say 10 percent), and the extra money would be used for lighting elsewhere in the city, or even to subsidize the cost for low-income households adopting their own streetlights.
A similar scheme could be used in transit. The public-good funding model is in crisis, with too little revenue to support the desired level of service. One approach would be to create a discrete fund for the maintenance and improvement of each bus line and rail station. Anyone who wanted to – neighbors or otherwise – could donate to these “earmarked” accounts. However, the agency would levy a tax on the donations, shifting that revenue to the stations and lines receiving the least. That way, institutions and riders could donate to support “their” station or line, and the system as a whole would gain access to scarce unconstrained funds.
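A minimal sketch of that redistribution logic, assuming an invented 15% tax rate and hypothetical station names (a real agency would pick its own rate and allocation rule):

```python
# Hedged sketch of the earmarked-donation idea: each donation is taxed,
# and the pooled tax is redirected to the least-funded station.
# The 15% rate and the station names are invented for illustration.

TAX_RATE = 0.15

def allocate(donations: dict[str, float], tax_rate: float = TAX_RATE) -> dict[str, float]:
    """Return each station's funding after taxing donations and
    granting the pooled tax to the station that received the least."""
    kept = {s: amt * (1 - tax_rate) for s, amt in donations.items()}
    pool = sum(donations.values()) * tax_rate
    # Grant the whole pool to the least-funded station; an agency might
    # instead spread it across the bottom quartile of stations.
    neediest = min(kept, key=kept.get)
    kept[neediest] += pool
    return kept

funding = allocate({"Downtown": 10_000.0, "Airport": 5_000.0, "Eastside": 500.0})
print({s: round(v, 2) for s, v in funding.items()})
```

In this toy run, the station with only $500 in donations ends up with $2,750 – more than five times its earmarked amount – while the well-funded stations still keep 85% of theirs.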
Many bus and rail lines connect rich and poor neighborhoods, so donations for these may already have equalizing effects. This approach may be troubling because it seems to encourage differentiated levels of service across neighborhoods. However, paying directly for transit service is not unheard of: in Washington, D.C., sponsors of major events have made cash payments to keep the subway system open later in the evening. Private groups dedicated to augmenting public services are already well established for parks (private conservancies), schools (PTOs), and urban neighborhoods (BIDs). Why not for transit?
I’m not sure whether such a scheme would be workable. However, in the absence of reform, our current system will remain under stress. If we don’t like the alternatives (such as a simple fee for service), advocates for public transit and local governments should consider innovative alternatives that satisfy both fickle voters and the public interest.
Thanks to Libby for the post title idea
Posted: May 11th, 2010 | Author: Rob Goodspeed | Filed under: Government, Technology | Tags: Gov 2.0, Government Data, Transparency | 3 Comments »
At an event I attended in March, Massachusetts Chief Information Officer Anne Margulies raised a simple yet profound issue: although the Commonwealth is committed to open data, it has yet to figure out which datasets to post online through its new data portal, mass.gov/data.
Plenty of transparency advocates would say the answer should be “all of it.” However, I think this answer is unsatisfactory for a couple of reasons. First, Massachusetts faces very real resource constraints. Administrative data is managed by hundreds of legacy systems across over 100 independent agencies. Many of these systems contain personal or otherwise sensitive data, which precludes throwing open the doors and requires time to create public reporting scripts. Second, the “free it all” position overlooks the government’s role as data collector. Plenty of information is collected and released merely as a public service: environmental data, population statistics, etc. Instead of just focusing on making paper records digital, we should discuss the larger issue: what types of information should governments make available?
I think there are several basic categories of data that government should release. Each has its own logic, and reviewing the categories emphasizes the multiple purposes of transparency.
1. Data “About the World” To Inform Research and Policy Debate
For a variety of reasons, governments often collect some of the most accurate and up-to-date descriptive data about communities. This includes a vast array of geographic data, school and testing data, demographic data, employment and economic statistics, and more. It should be released primarily because it enhances our ability to create good policy, and our collective understanding more generally.
2. Data Released to Improve Service Delivery
Some data should be released because it improves access to government services. This includes cases where the data itself is the service (e.g., research reports), but also includes more technical forms such as transit system data, government facility locations, and service details.
3. Data to Help Hold Government Accountable
A host of budget, voting, and performance data should be released to hold government accountable. However, metrics produced internally as part of stat-type programs introduce the problem of mixed motives: why would governments want to release data that can be used against them? This problem can be partially avoided by separating the data function from operations within the government organization. This concern also raises the important issue of presenting information in accurate ways, including metadata about definitions and collection methodology.
4. Data to Change Private Decisions to Achieve Policy Goals
In their book Full Disclosure, Archon Fung, Mary Graham, and David Weil argue that many transparency policies fall into the new category of “targeted transparency.” Encompassing mortgage reporting requirements, nutrition labels, and automobile crash ratings, these efforts make information available with the deliberate intention of achieving a public objective by influencing private decisions. These policies succeed when they provide people the facts they want in the “times, places, and ways that enable them to act.” The authors stress that these aren’t limited to policies seeking economic changes; they also include campaign finance reporting laws, which work through political channels. Although implemented with the intention of reaching end users, how easily citizens can access this data varies widely. Some data are readily available, but governments rely heavily on intermediaries to analyze and present more complex (and politically charged) data like the toxics release inventory or mortgage lending data from banks.
5. Data Posted to Improve Access Within or Across Government
Although it’s rarely discussed, I think an important use of available data is to help break down barriers within and between government agencies. This will be an unintended use so long as our governments are separated into layers and silos. This purpose explains why so much of the data on the HUDUser website are specific to certain policies or programs: the intended users are state and local governments and nonprofits, not the general public.
What do you think? Are these the right categories, or have I omitted something important?
Posted: April 7th, 2010 | Author: Rob Goodspeed | Filed under: Government, Technology | Tags: Data, open government | Comments Off
I posted an open government reading list to CoLab Radio, a blog run by the MIT Community Innovators Lab. Suggestions or comments are welcome!