Tuesday, December 9, 2008

A Fast, Effective and Efficient Emergency Communications Plan for the Obama Administration


David Aylward[1], December 4, 2008


Introduction

All the key reports on 9/11, Hurricane Katrina, and similar disasters have pointed to failures and weaknesses in emergency communications and information sharing as critical and primary problems. 9-1-1 and emergency communications and response systems remain largely stuck in the technology and mentality of the 20th Century, resulting in lost lives, systems that repeatedly underperform or fail in major disasters, and overall inefficiency. The new Administration will face this one way or another. It can confront the problem as a major national and White House priority – and solve it. A strong, visionary effort by the Obama Administration to deploy next generation emergency communications and information technology will save lives and property, reduce injuries, protect homeland security, improve emergency medical care, and ultimately save money across a wide array of local, state and federal safety and related functions. Such an effort is also wholly consistent with, and supportive of, the emphasis on investment in broadband and critical infrastructure outlined by President-Elect Obama.[2]

Alternatively, the Obama Administration can allow the current Bush programs and policies in a wide variety of agencies to continue. If it does the latter, responses to the emergencies that will inevitably arise in the next few years will be weakened and lives will be lost – both in disasters and in daily emergencies. We have failed to make significant progress in all forms of emergency communications since 9/11, despite spending billions of federal dollars. The current morass of emergency communications policy is not due to a lack of emphasis and resources on homeland security at the Federal level; it results from a lack of direction and leadership that understands modern information technology. It results from a failure to treat the emergency communications system as a single, “virtual enterprise”, pursuing instead disjointed and uncoordinated efforts based on different public safety professions and historical perspectives.

The Obama Administration should apply its focus on modern information technology and 21st Century information sharing paradigms to dramatically improving our nation’s emergency communications. 9-1-1 agencies need to be able to receive data and video from the public, not just voice calls, and need to be the nerve center of a smart, all-hazards, Internet-protocol, interoperable and integrated emergency response system. Emergency organizations of all kinds could provide more informed response (and responders would be safer) if they had access to, and could share, video, text messages, car crash data, key personal electronic health data, building plans, extrication guides, traffic information, electronic maps, weather and hazmat data. These are all available electronically somewhere, but usually not to the brave responders who need them. From the first 9-1-1 call at the beginning of an emergency, to the patient’s going home from the hospital, and from the onset of a disaster to the communities’ recovery, we need to give all our responders access to all the information they need, when and where they need it.[3] Consider instead where we stand today:

· We still lack real voice interoperability for and with the tens of thousands of emergency organizations[4], and data interoperability with almost all of them
· Public warning and alerting is a hodgepodge of stovepipe efforts: locally, at the states, by various DHS initiatives, within HHS, DOJ, DOT, Commerce, the FCC, and the private sector
· Inter-organizational data sharing and situational awareness is spotty at best
· 9-1-1 centers can accept only voice calls; they cannot receive text, video, or data beyond location and callback number, and they cannot send or receive information to or from “N-1-1” entities like 2-1-1 social services, 3-1-1 city services, poison control centers or 800-number crisis hotlines
· Local emergency IT systems rarely connect to private organizations and NGOs involved in emergencies, much less employers and the general public
· The National Guard and other military organizations typically cannot share situational awareness or other information with civilian organizations, unless they are using the same software application, which is usually impractical
· In providing billions of dollars in emergency communications grants to the states and localities, there has been a narrow focus on buying new “transport” (communications networks and devices), rather than on Internet Protocol and the “application layer”, where interoperability and the linking of legacy systems for information sharing are far more easily and cheaply accomplished
· In direct federal expenditures for software systems, there has been a dominant focus on buying specific federal software applications and trying to force state and local agencies to use them, rather than providing network-centric tools that enable the linking of diverse legacy applications
· There are no major federal or state efforts to focus on network-centric solutions – what is needed “in the middle” to connect the legacy systems of tens of thousands of emergency organizations
· There are no major government-enabled efforts to establish common standards across all the domains (professions) involved in emergency preparation, response and recovery; the small programs that exist are not seriously funded or given priority; the handful of larger programs are domain-specific
· There is no common “Yellow Pages” of emergency organizations that would enable routing of incident information to all the organizations in the safety eco-system[5], nor are there standards for one (the equivalent of the DNS that routes email across .com, .net, .gov and other domains); every sender of emergency messages therefore must maintain its own list of recipient organizations and how to send data to them, a recipe for incompleteness and inaccuracy (a minimal sketch of such a registry lookup follows this list)
· There is no federated organizational access control and identity management service across the emergency response domains, nor standards for one, in which to record information rights policies so that organizations can know which entities are allowed to send and receive emergency information, and that they are who they say they are[6]; these rules instead have to be hard coded, creating silos by agency, jurisdiction or domain
· Huge amounts of local, state and federal money are being wasted on duplicative programs and functions due to the stovepiped, balkanized approach that has been followed.[7]
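To make the “Yellow Pages” and DNS analogy in the list above concrete, here is a minimal sketch of what an agency locator lookup might look like, written in Python. Every name, field, and endpoint below is an invented assumption for illustration; it is not an existing registry, product, or standard.

```python
# Hypothetical sketch of an "agency locator" lookup: a shared registry maps
# incident domains and service areas to the organizations (and delivery
# endpoints) that should receive a message, so senders do not have to
# maintain their own contact lists. All names and fields are illustrative.
from dataclasses import dataclass

@dataclass
class Organization:
    name: str
    domains: set      # e.g. {"fire", "ems", "hazmat"}
    counties: set     # service area, simplified here to county names
    endpoint: str     # where standardized IP messages are delivered

REGISTRY = [
    Organization("Springfield Fire Dept.", {"fire", "hazmat"}, {"Greene"},
                 "https://example.org/springfield-fire/inbox"),
    Organization("County EMS Agency", {"ems"}, {"Greene", "Polk"},
                 "https://example.org/county-ems/inbox"),
    Organization("State Emergency Mgmt.", {"fire", "ems", "hazmat"},
                 {"Greene", "Polk", "Dallas"},
                 "https://example.org/state-em/inbox"),
]

def locate_recipients(incident_domain: str, county: str):
    """Return delivery endpoints for every organization registered for this
    incident domain and geography -- the DNS-like lookup a sender would use."""
    return [org.endpoint for org in REGISTRY
            if incident_domain in org.domains and county in org.counties]

if __name__ == "__main__":
    # A hazmat incident in Greene County reaches the fire department and the
    # state emergency management agency, but not the EMS agency.
    print(locate_recipients("hazmat", "Greene"))
```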

Why is this? The primary reason is that traditionally emergency communications decisions have been made at the individual agency level. Whether it is local, state, or federal government, there is no senior official above (and with authority over) the individual agencies and professions in charge of the big picture of emergency communications. There is no senior federal or state official responsible for emergency communications and information technology, with the responsibility and budget to bring a coherent “virtual safety enterprise” analysis and architecture to the problem. No senior official is asking and demanding answers to the overall key questions, or insisting on overall systemic outcomes and efficiencies (as opposed to improving the functioning of single agencies or single professions).

There is no emergency response Chief Technology Officer. No one is looking at the total cost of ownership of emergency information and communications technology for the federal government, much less federal, state, local and tribal. If they did, they would insist on shared applications that would produce overall benefits. No one is demanding true “all-hazards” planning, design, and implementation. (DHS is mostly focused on intelligence and law enforcement; to the extent it does emergencies it forgets health, and only does disasters; HHS only does health IT, but not emergencies except a bit for disasters, and a lot for some participants in pandemics; DOJ only does intelligence and crime; DOT does traffic information, EMS standards, and a smidgen of 9-1-1; most state and local agencies focus on day to day emergencies; so-called state “interoperability committees” were formed to rationalize the expenditure of federal funds on new digital police and fire radio systems).

Despite tough talk from the Bush Administration about the importance of emergency, interoperable communications, no one in the White House or OMB ever exercised serious overall leadership, demanding a common vision, coordination and cooperation. The “e-gov initiatives” supposedly managed by OMB included two emergency information and communications technology (ICT) projects: Safecom and Disaster Management[8]. Neither ever attempted a comprehensive, inclusive view. Individual agencies addressed these issues for themselves in a piecemeal way. There are at least 17 separate programs within DHS, scattered through several divisions that claim to be addressing some of the above issues.[9] None are trying to address all of them. Staff who try to take an overall view, to contribute to the whole, have been shut down. DHS, Congress, the FCC and others have not effectively addressed the problems for jurisdictional, interest group and leading safety vendor reasons.

And what about the transition to the new Administration? Issues concerning emergency information, communications technology and information sharing are most likely mentioned, perhaps highlighted, in transition reports from teams covering DHS, HHS (HRSA and CDC), the FCC, and even the Corporation for Public Broadcasting. These issues could just as well be part of reports for DOJ, DOT, DoE, DOD (including National Guard), and DoA, as all five of those departments have emergency ICT issues, programs, and/or requirements. But these issues cannot be properly addressed within those individual boxes, even if within each agency they were perfectly coordinated. Agencies and program staff understandably resist taking on projects beyond their boundaries and they have little incentive to do so.

Suggested Action Plan

Summary of Expected Outcomes, Action Items and Key Principles

The Obama Administration should not follow the failed models of the past. It should:

· Recognize that solutions to these issues are the compelling safety reasons and uses for the broadband deployments President-elect Obama has committed to encourage
· Make emergency communications a signature challenge and project for the Administration, leading from the top, including the new overall CTO
· Recognize there are tangible day-to-day and disaster safety improvements that can be delivered to the public in the near future as we work toward a fully integrated, interoperable emergency response and communications system. The latter will take a long time and a lot of hard work, but early successes will speed adoption
· Recognize that, at a time of enormous demands on government budgets at all levels, better communications and information technology can be delivered at lower cost, following the path that the commercial sector has blazed
· Harness and redirect projects already underway and undertake new ones to make rapid, tangible progress on emergency communications, with clear benefits to both the public and emergency responders.

Expected Outcomes

In the first 18 months the Plan should:

· Deliver rapidly an all-hazards public alert and warning system, spanning all levels of government and the private sector, linking all outlets to the public, and enabled by the multi-use core services discussed below[10]
· Deliver rapidly a handful of other high value, high profile shared, managed information services, enhancing informed emergency response and situational awareness, and pointing the way toward an exciting, safer future. Two of these should be:
o Rapid, flexible and inexpensive voice interoperability through the use of software
o Rapid, inexpensive situational awareness for all authorized organizations
· Enable the above through the development and standardization of two key shared “core services”[11] required for interoperability across the entire safety enterprise:
o Who are the participants? An agency locator: a GIS-based registry of the organizations involved in emergencies, needed so that the local, state, federal and private entities responsible for emergency response can share information before, during, and after emergencies of all magnitudes
o Security. Access control/identity rights management and related security services in which the organizations referenced above are registered and given appropriate authorizations to send and receive emergency information (a minimal sketch of such a policy check follows this list)
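The Security item above can be made concrete with a minimal sketch of a policy check against a centrally recorded rights table, written in Python. The organization identifiers, message types, and policy entries are invented assumptions; in practice the rules would be set by the appropriate governing bodies, and the service would also verify identity.

```python
# Hypothetical sketch of an access-control core service check: rights
# policies are recorded centrally, and a simple lookup answers "is this
# organization authorized to send this kind of message for this area?"
# Identifiers, message types, and the policy table are illustrative only.
RIGHTS_POLICIES = {
    # (organization_id, message_type) -> counties the sender may cover
    ("us.mo.springfield.fire", "cap_alert"): {"Greene"},
    ("us.mo.state.emergency_mgmt", "cap_alert"): {"Greene", "Polk", "Dallas"},
    ("us.mo.county.ems", "edxl_resource_request"): {"Greene", "Polk"},
}

def is_authorized(org_id: str, message_type: str, county: str) -> bool:
    """The core service stores and enforces the rules; the rules themselves
    are made by whatever body governs that incident type and area."""
    allowed_counties = RIGHTS_POLICIES.get((org_id, message_type), set())
    return county in allowed_counties

if __name__ == "__main__":
    print(is_authorized("us.mo.springfield.fire", "cap_alert", "Greene"))  # True
    print(is_authorized("us.mo.springfield.fire", "cap_alert", "Polk"))    # False
```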

Action Items

· Assemble immediately a team of paid and unpaid experts who can deliver the broad-based policy, technical, operational, political and economic planning needed to accomplish these suggestions
· Assemble all the mentioned federal agencies into a federal team with a designated White House leader, with OMB support, and the clout to require cooperation
· Assemble a working group of the above and leaders of state and local agencies, leaders of the affected professions (see two bullets down), NGOs, and relevant private sector leaders; this group needs to be well populated with experts without a corporate or specific constituency “axe to grind”
· Approach emergency response overall, as an end-to-end system, focused on outcomes for citizens, under all hazards, and on overall costs and benefits, rather than as a collection of discrete response professions (e.g. police, 9-1-1), emergency problems (e.g. nuclear, pandemic) or specific communications methods (e.g. P-25 statewide radio network, wireless broadband)
· Define safety/emergency response broadly and inclusively as an overall “virtual enterprise”. Require planning and Total Cost of Ownership analysis according to this enterprise and with all the participants at the table. This should include the traditional first responder professions (police, fire, EMS), other emergency response professions, agencies and NGOs (e.g. 9-1-1, emergency management, hospitals/trauma centers, public health, Red Cross, poison control, mental health, and other “N-1-1” entities), government emergency support professions (e.g. transportation, public works, IT, schools), critical infrastructure providers, the media (especially public broadcasting), and other relevant participants (e.g. the Chamber of Commerce). It should include both wired and wireless communications, with a heavy emphasis on Internet Protocol software based solutions (the “Application Layer”) that link legacy systems, instead of buying all new systems
· Require recipients of federal funding to implement a broad, inclusive definition of “interoperability” that goes beyond traditional voice radio interoperability restricted to “first responders” and advances interoperability of voice, data and video communications among all entities involved in emergency response. Similarly, legislation should allow funds to be used not just for “equipment” but also for “software and services”, allowing the use of shared IP-based emergency service networks and services
· Establish an inter-agency working group to coordinate the distribution process and eligibility criteria of all sources of Federal funding for 9-1-1 and emergency communications
· Provide funding for the shared IP emergency communications backbones and associated services in each state and/or region needed by 9-1-1 and all other emergency entities to support Next Generation emergency communications; coordinate this backbone need with the ongoing development of a national 700 MHz public safety broadband wireless network currently being planned by the FCC
· Initiate Total Cost of Ownership thinking and analysis for this virtual enterprise – where federal money is involved, require planning decisions as if there is a single owner; the public will benefit
· Involve citizen groups that respond to end to end, citizen-focused approaches, including the American Heart Association, Brain Injury Association, American Automobile Association
· Involve the organizations that represent individual responders, not just the agency chiefs: Emergency Nurses Association, EMTs, firefighters, FOP for example
· Adopt “everything over Internet Protocol” and open architecture requirements
· Significantly broaden the narrow focus on buying new networks to achieve interoperability, and generally on communications “pipes” (the “Transport Layer”)[12]
· Add a major focus on the Application Layer, and specifically on managed, shared services that do not require capital investment or sophisticated IT staffing by the mostly small emergency agencies
· Provide major support for the creation, by the safety professions themselves, of data dictionary, messaging and other standards serving the whole virtual safety enterprise, not just parts of it[13]
· Rather than trying to upgrade the tens of thousands of individual agencies, which has been proven to be horrendously slow and expensive[14], focus on the “middle”, the network-centric applications that can enable more informed, interoperable and situationally-aware response[15]
· While the above is going forward, launch on a parallel track a “Solutions Delivery” Project charged with delivering broadly useful, interoperable (and interoperability-enabling) solutions rapidly, in part to show the value of this approach to the public and the other affected constituencies. These solutions need to be delivered over large enough regions or uses to have an impact on current thinking. In the future, most of these services should be self-supporting through subscriptions from the public and private sectors. The government should cost share for the development and deployment of these components, bear the initial cost of developing the policies (business rules) at all levels of government to use them, and train staff to use them.

Emergency Communications Solutions Delivery Project

This will seek to implement the agenda described above. It is designed to successfully and efficiently address in 18 months a handful of salient emergency communications issues. It is designed to (a) enable initial interoperability between legacy systems, (b) address the “bottom end” of the market (those agencies, organizations and persons with the least resources), (c) encourage cooperation and information sharing between appropriate entities, and (d) provide software tools, but leave policies and information sharing decision making in the hands of the appropriate levels of local, state, tribal and/or federal government. In all cases, the elements must be standards-based and fit in an open architecture so that solutions can freely interact with applications and networks controlled by other entities. It will include the following elements:

Agency and Consumer Software Services and Applications

· Interactive alerting/public warning. Today there is a multiplicity of purpose-built, stovepipe warning systems, and we are adding to them[16]. We need to link these legacy systems, not replace them. The way to do that is with common message standards and shared core services, not by building new systems or stand-alone products. We need to be able to reach the public and vice versa, both through social networks of all kinds and more formally through established agencies, services and businesses. We need to connect official sources of any-hazard, any-area alerting to any and all systems of communication in use by the public[17], and offer the public at least one attractive messaging service for personal emergency use as well as for communications to and from government entities[18]
· Map-based, web-based emergency message generator and receiver application for budget-challenged and volunteer emergency organizations [19]
· Public information distribution after initial alerts, taking advantage of the trusted and ubiquitous public broadcasting system and all its digital assets as both part of the initial alerting process, and then afterwards as a public destination and distribution system for information

Shared Services[20]

· Voice communications interoperability, using sophisticated software to link dynamically any kind of wireless and wireline communications to any other kind of communications,[21] supported and enabled by core services
· Family emergency preparedness: intensive communications of the content already developed by ready.gov and leading partners such as Sesame Workshop[22]
· Simple situational awareness, using incident and related information delivered to a locally-specific electronic map[23] (a minimal sketch follows below)
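As a purely illustrative sketch of the situational awareness item above: incident records can be converted into GeoJSON, a widely supported format that most web mapping tools can display. The field names, identifiers, and coordinates below are invented examples, not a proposed standard.

```python
# Hypothetical sketch: turn simple incident records into a GeoJSON
# FeatureCollection that a shared, web-based map can display.
import json

incidents = [
    {"id": "INC-001", "type": "structure_fire", "lat": 37.209, "lon": -93.292, "status": "active"},
    {"id": "INC-002", "type": "road_closure",   "lat": 37.180, "lon": -93.260, "status": "cleared"},
]

def to_geojson(records):
    """Wrap incident records in a GeoJSON FeatureCollection for a common map."""
    features = [{
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [r["lon"], r["lat"]]},
        "properties": {k: v for k, v in r.items() if k not in ("lat", "lon")},
    } for r in records]
    return {"type": "FeatureCollection", "features": features}

if __name__ == "__main__":
    print(json.dumps(to_geojson(incidents), indent=2))
```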

Enabling Services (“Core Services”) [24]

· Develop and standardize[25] the key shared “core services” that will allow efficient interoperability over the entire safety enterprise, helping connect networks and applications into what an FCC report called an “internetwork”[26]:
o Agency locator/GIS-based registry of organizations involved in emergencies[27]
o Access control/identity rights management and related security services[28]

· Provide funding to develop best practices for registering organizations and rights management policies[29] in the core services, and having legacy messaging and RoIP systems interact with them

· Provide and host middleware for intelligent message brokering, security, and auditing of compliance with access control and other rules.[30]

The above applications and services need to be hosted and offered as managed services on a subscription basis. They need to be managed on secure, reliable and sophisticated platforms that also offer agency customers service and billing functions. Appropriate subscription fees can and should be charged.[31] Each of these components must be architecturally independent so that any application using the designated standards can interoperate with them. Each will be able to initiate and process information using the DHS-sponsored international OASIS CAP and OASIS EDXL family of emergency messaging standards[32], along with other standardized data dictionaries and forms (e.g. the National Information Exchange Model); a minimal sketch of such a message appears after the list below. Organizations receiving federal emergency funds should not be forced to use any of them. They should be free to use legacy systems and acquire new ones of their choosing, as long as they:

· Convert voice and data communications to the outside world into Internet Protocol[33]
· Connect redundantly to Internet Protocol networks, ideally backbone networks shared with other emergency organizations
· Ensure their legacy and new applications and systems interface to these standards.[34]
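For illustration only, here is a minimal sketch of building a CAP-style alert with Python's standard library. The element names follow the OASIS Common Alerting Protocol specification cited above, but the structure is simplified and every value is invented; this is a sketch, not a validated CAP implementation.

```python
# Minimal, illustrative construction of a Common Alerting Protocol (CAP)
# style alert. Simplified: a real implementation would validate against the
# published OASIS schema and fill in many more optional elements.
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

CAP_NS = "urn:oasis:names:tc:emergency:cap:1.1"

def build_cap_alert(identifier: str, sender: str, headline: str, area_desc: str) -> str:
    alert = ET.Element("alert", xmlns=CAP_NS)
    for tag, text in [
        ("identifier", identifier),
        ("sender", sender),
        ("sent", datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S+00:00")),
        ("status", "Actual"),
        ("msgType", "Alert"),
        ("scope", "Public"),
    ]:
        ET.SubElement(alert, tag).text = text
    info = ET.SubElement(alert, "info")
    for tag, text in [
        ("category", "Safety"),
        ("event", "Hazardous Materials Release"),
        ("urgency", "Immediate"),
        ("severity", "Severe"),
        ("certainty", "Observed"),
        ("headline", headline),
    ]:
        ET.SubElement(info, tag).text = text
    area = ET.SubElement(info, "area")
    ET.SubElement(area, "areaDesc").text = area_desc
    return ET.tostring(alert, encoding="unicode")

if __name__ == "__main__":
    print(build_cap_alert("EX-2008-001", "dispatch@example.org",
                          "Hazmat release near downtown", "Greene County"))
```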




[1] David Aylward is a founder and Director of COMCARE Emergency Response Alliance (www.comcare.org), President of National Strategies, Inc., and a former Chief Counsel and Staff Director of the US House Subcommittee on Telecommunications, Consumer Protection and Finance. He has worked on emergency response issues for more than a decade. daylward@comcare.org. This paper is a living document; comments are welcome.
[2] This paper draws heavily on the analysis and near term suggestions for progress made in an article prepared by the author and the former Inspector General of DHS, Clark Ervin. See Clark Kent Ervin and David K. Aylward, “Next-Generation Inter-Organizational Emergency Communications,” Aspen Institute (sponsored by the Ford Foundation), December 2006, http://www.aspeninstitute.org/atf/cf/%7BDEB6F227-659B-4EC8-8F84-8DF23CA704F5%7D/Homeland_InteroperabilityReport.pdf
[3] A short video at www.comcare.org/video.html provides a vision of how emergency medical response could work if enabled in this fashion. Unlike the faster progress described in this paper for public alerting and warning, and a few other capabilities, that vision is achievable in the medium term (3-4 years).
[4] There are about 120,000 independent emergency response and response support organizations in the United States, not counting more than 100,000 schools, and other NGOs and businesses that are involved in emergencies.
[5] The FCC’s Network Reliability and Interoperability Council VII’s 1D Report called for these “Facilitation Services” to be established several years ago. This paper and others now call these “Core Services”.
[6] See footnote 5.
[7] Aside from safety and security benefits, an extremely interesting study would compare the Total Cost of Ownership to local, state, tribal and federal governments of the current siloed systems under the control of their multiplicity of masters, with the TCO of a modern efficient system based on Internet Protocol where backbone networks, enterprise services, and other appropriate functions and costs were shared, while customer premise applications were required to communicate externally using standard messages.
[8] The author’s organization, COMCARE, was a contractor to Disaster Management for two years, helping develop standards for communicating data between the diversity of emergency organizations.
[9] See, e.g., DNDO (CBRNE), S&T UICDS, S&T SAFECOM, S&T Disaster Management (now renamed), S&T assorted emergency response ICT projects, FEMA IPAWS, FEMA Office of Communications, FEMA OPEN project, FEMA Interoperable Emergency Communications Grants Program, FEMA CEDAP, FEMA NIMS, Homeland Security Information Network, State Homeland Security Program (SHSP), Urban Areas Security Initiative (UASI), Metropolitan Medical Response System (MMRS), Citizen Corps Program (CCP), CIO’s NIEM.
[10] These are described in some detail in the Aspen Institute article cited in footnote 2.
[11] These were strongly recommended in 2006 by the FCC’s Network Reliability and Interoperability Council VII 1d report, see http://www.nric.org/. The need for these core services has been noted by a variety of emergency organizations and papers. See, e.g., HHS’ Healthcare Information Technology Standards Panel (HITSP) Interoperability Specification 4; NENA Next Generation 9-1-1 paper, 2007; Network-centric Operations Industry Consortium Network-Enabled Emergency Response Project papers. See also www.comcare.org/Core_Services.html. COMCARE has done a great deal of requirements work with all the emergency professions for core services over the last several years. It has produced detailed functional and technical requirements, and designs that meet them. As yet, no IT company has built an alpha version that would allow the standardization process developed by the Open Geospatial Consortium and COMCARE to proceed.
[12] See the COMCARE blog article by the author, “Why Doesn’t the Government Get Emergency IT?”, September, 2008, for an exploration of this issue. http://comcare-talk.blogspot.com/. Certainly operability is an important issue, but it is not the only one. And certainly there are some agencies which are not close to and/or cannot afford wired broadband connections, but those are in a small minority.
[13] There are a number of standards efforts, but they have all been either entirely underfunded or stovepipes (one or only a few emergency domains), or both. The largest is the AHIC/HITSP effort funded by HHS to develop standards for electronic health records. It includes, but is not focused on, emergencies. DHS never gave the money or direction to the National Information Exchange Model (NIEM) to become more than an adoption of the pre-existing, longstanding justice community taxonomy efforts. The DHS Disaster Management Program got off to a positive start gathering all emergency responder professions around a table to develop detailed draft common messaging standards, a number of which are now official OASIS international standards. But it has been given little funding and has produced no additional standards output in almost three years.
[14] Consider the 14 year “Trail of Tears” to upgrade 9-1-1 centers to receive a tiny amount of data along with wireless calls: latitude/longitude and call back number. Around 20% of the centers, covering around 30% of the land mass, still cannot receive that data.
[15] The US military has a long and deep history in the area of network-centric information technology operations. Faced with a new requirement for the military (regular and Guard) to be interoperable with civilian organizations in the US and abroad to allow assistance in complex human disasters, leading military contractors are exploring how to help US emergency agencies solve these problems. See the Network-centric Operations Industry Consortium’s (NCOIC) Network-centric Enabled Emergency Response project which is focused on core services.
[16] For example, the WARN Act has agencies establishing a one way alert capability that will use public broadcasting infrastructure only to get to stations and cell companies, only to deliver to cell phones, only up to 90 character text messages, and only for Presidential alerts, “life threatening” emergencies, and “Amber alerts”. There is nothing wrong with that use case and pathway conceptually as a component of a comprehensive system, but that is not how it is being constructed.
[17] All these systems need to be able to register for this purpose in the agency locator core service, and have their authority to send and receive messages about different kinds of incidents over different geographic areas registered in the access control core service.
[18] This is the messaging equivalent of using 9-1-1 for cell phones. The government will benefit if it has large audiences of private users it can reach in emergencies, and if systems such as these have a standardized way of accessing government services, including but not limited to 9-1-1.
[19] The functional requirement here is simple. We need to allow any authorized organization to initiate and receive a message concerning a specific incident type and a specific location or area of any size, using the OASIS EDXL Distribution Element or OASIS Common Alerting Protocol. If the DE is used, it can carry a digital payload of any standard, not just OASIS messages.
[20] “Shared services” are (generally managed) services shared between multiple, but not all, emergency domains.
[21] See http://www.comcare.org/RoIP.html. Cisco, Twisted Pair Solutions, and others have commercial software that does this.
[22] Grover and Rosita are the stars of an excellent new DVD-based package developed by Sesame Workshop that teaches kids of all ages (including very young ones) and their parents how to prepare for disasters. It was announced in September, 2008, but has not received wide circulation.
[23] The fastest, cheapest approach to common situational awareness is to share incident and related data about the affected area on a web-based map. For those agencies with limited resources the mapping need not be expensive. One state has taken the lead in this area using Google Maps. See “Virtual Alabama”. Maryland has pursued a higher end, non-incident, proprietary version of this idea with the GIS company ESRI; its acronym is MEGAN.
[24] These are managed, standardized services offered to all emergency domains.
[25] The Open GeoSpatial Consortium, supported by COMCARE, has developed a detailed plan to do exactly this: in an open, inclusive process develop standards for the content and querying of the two core services described here. It awaits a leading company to build and test the first alpha versions of core services meeting the developed requirements.
[26] See FCC Network Reliability and Interoperability Council VII, Report of Group 1d, 2005, www.nric.org.
[27] COMCARE has undertaken years of functional and technical requirements development, resulting in a detailed design for the agency locator registry, called “EPAD”, and detailed technical and design requirements for the companion access control/identity management core service. See www.comcare.org/epad.
[28] See www.comcare.org/core_services.html.
[29] Core services do not make rights policies; they simply provide a software application in which to record those policies and implement them efficiently. The rules themselves are made by the appropriate body for the incident type and geographic area.
[30] This is a possible use for the new version of FEMA’s OPEN message broker that is being redesigned now.
[31] The point is not to preclude customer premises software, but to create managed service options so that high quality information technology is available to all emergency organizations, even those that lack the budget and expertise to buy their own software and manage it.
[32] For complete versions of the standards, see http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=emergency. FEMA’s Disaster Management Program funded the process that developed the detailed specifications for these standards. It was subsequently shifted to DHS’ Science and Technology Directorate which also has some other, uncoordinated standards efforts (e.g. a “CAD to CAD” project of some justice and 9-1-1 agencies).
[33] Modems are relatively inexpensive; many agencies already have them.
[34] This is not a new idea. Federal DHS and DOJ grantees are already required to acquire software that can interface with their various standards.

Tuesday, December 2, 2008

Broadband: Necessary but Not Nearly Sufficient

David K. Aylward

Today a wonderful cross section of folks who care about communications will come together to announce a National Broadband Strategy Call to Action. It is a terrific and important undertaking. Indeed, since at least the summer of 2001, COMCARE has actively advocated doing for all emergency organizations what we had earlier done for the schools: connecting them all to broadband. We must start with "everything over IP." So we strongly support this initiative led by our friends at the New America Foundation and others.

The paper that will be handed out today references the benefits of broadband to public safety and health care, among a list of other areas. In much more detail, Bob Litan did a presentation last month on the health care benefits that are possible if we can get medicine into the digital information sharing age. His focus as well was on the need to deploy broadband. Throughout 2008, the FCC has worked to try to figure out how to get wireless broadband to safety agencies. Certainly organizations cannot begin to take advantage of the increasingly rich information environment in which they sit without it, much less share that information with other entities and any staff in the field.

It is also true that broadband is necessary but not nearly sufficient. Hooking up 6500 9-1-1 centers and 30,000 EMS and fire agencies to broadband does very little on its own. Almost every hospital has broadband today, but information sharing with other parties is minimal.

Unlike individual consumers, however, the broad, diverse and highly balkanized safety sector cannot get the benefits of electronic information sharing from connectivity alone; it needs an equal (if not greater) focus on the application layer, on software and related issues. Earlier this fall I spent some time ruminating on why, in our policy debates (and indeed our policies), there is an almost singular focus on communications and broadband (the transport layer), and little to none on information technology, the software.

Those thoughts are in my September posting which I have updated a bit. This seemed like a good time to underline those thoughts again.

Friday, September 26, 2008

Why Doesn't Government Get Emergency IT?


David Aylward, September 26, 2008

Achieving integrated and interoperable emergency response systems requires that the participants connect at the transport layer (the communications pipes) and at the application layer (simplistically, software and its interactions). (For purposes of clarity, I am simplifying to two layers in the stack.)

For the past several years, the emergency space has had a very strong focus on the transport layer, almost ignoring the application layer. This translates into "building interoperable emergency networks and systems" as opposed to "linking legacy systems with software." The first is very expensive, and can't be the solution anyway, as all the relevant organizations are never going to be on the same network, using the same radios. Yet we continue to pour billions of dollars into building new pipes, while starving the application side.

In industry terms (again simplistically), we have been choosing the telecommunications industry over the information technology industry.

The commercial and military worlds are way, way ahead of what I call the "virtual safety enterprise". They are well down the road towards network-centric operations, cloud computing, managed services, service oriented architectures, and the like.

This imbalance is a very, very big deal because the only way to make rapid progress on inter-domain, inter-jurisdictional, and inter-everything else safety information sharing is to focus on the application layer: convert every communication into Internet Protocol and focus on what needs to happen "in the middle" and with "interfaces to the middle" instead of the end points (what happens in and at different agencies). (That doesn't mean transport is unimportant, but we already have lots of it, in lots of different flavors. My argument is not to ignore it, but to have balance.)
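As a toy illustration of "the middle" (every name and message shape here is an invented assumption, not an existing product or standard): each legacy system keeps its own internals but exposes a small adapter that accepts a standardized IP message, and a broker in the middle routes messages by incident type.

```python
# Hypothetical sketch of an application-layer broker "in the middle":
# legacy systems register adapters, and standardized messages are routed to
# every adapter interested in that incident type.
from collections import defaultdict
from typing import Callable, Dict, List

class Broker:
    def __init__(self):
        self._routes: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, incident_type: str, adapter: Callable[[dict], None]):
        """A legacy system's adapter registers interest in an incident type."""
        self._routes[incident_type].append(adapter)

    def publish(self, message: dict):
        """Deliver a standardized message to every subscribed adapter."""
        for adapter in self._routes[message["incident_type"]]:
            adapter(message)

def cad_adapter(msg: dict):
    # Pretend this feeds a legacy computer-aided dispatch (CAD) system.
    print(f"[CAD] new incident {msg['id']}: {msg['headline']}")

def radio_gateway_adapter(msg: dict):
    # Pretend this pages a legacy radio network through an RoIP gateway.
    print(f"[RADIO] tone-out for {msg['incident_type']} at {msg['location']}")

if __name__ == "__main__":
    broker = Broker()
    broker.subscribe("structure_fire", cad_adapter)
    broker.subscribe("structure_fire", radio_gateway_adapter)
    broker.publish({"id": "INC-003", "incident_type": "structure_fire",
                    "headline": "Two-story residential fire",
                    "location": "4th and Main"})
```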

I have been ruminating on why we have this imbalance. Why do so many in the Executive Branch, the FCC, and emergency response communities make this choice, focusing almost solely on transport: in talking, writing, policy making and grant making?

It struck me that the answer is in our history since around 1978.

"Communications" has been under the purview of government for a long time. Information technology mostly grew up outside of government (DARPA aside). So people in government in this area learned communications; IT has been a side show.

I grew up professionally with a Congress and FCC that focused on wired and wireless pipes. All our discussions and debates were about the pipes. The great battles of the 1970s and 1980s over competition were about competition in telephony. In the late 1970s, in the Computer II decision, the FCC explicitly declined to regulate information technology, information services. Those were "those other things". From time to time it has drawn the same line deeper and deeper.

And until recently, the IT industry was very happy to grow up in California and elsewhere and not have to go to fundraisers for Congressmen every night.

"Public safety" at the FCC for decades has meant (almost solely) spectrum for radios for first responders. Thus, we have witnessed a great debate in and around the FCC this year about developing an "interoperable wireless broadband network for public safety." (It seems to have escaped the attention of many that safety agencies don't exchange much data today, much less with the field, much less amounts of data that require broadband.) Almost every time a reporter writes on the topic or a Congressman addresses it, they note it will be a solution to the first responder interoperability problem. Little to no attention has been paid to the critical application layer issues that would allow data to start flowing between agencies on the broadband networks that already exist. Nor has the government made a top priority the application layer issues needed to link legacy radio and wired networks to each other, much less to the new P-25 digital trunked radio systems that billions of our tax dollars are being spent for, much less this new broadband network if and when it gets built. (But kudos for the small progress being made thanks to grassroots leadership.)

The FCC isn't just regulatory. It controls large amounts of money. It recently announced it was spending over $400 million of the Universal Service Fund on rural medical networks -- every one of which was mostly new transport capacity.

When DHS was formed and initially reached out to the traditional first responder community, not surprisingly it got the same communications-focused answers. Its policies ever since have reflected this. Billions of dollars are being spent directly and through the states and localities to buy new emergency networks (mostly radio); a few tens of millions have been spent on application layer issues (and mostly on end point, or area specific, applications, which should not be a priority).

The National Telecommunications and Information Administration at the Department of Commerce is about communications and spectrum. So when Congress handed it the lead role on the special $1 billion in "interoperability grants", what did it do? It let those one-time dollars be spent almost entirely to buy new radio networks. After major lobbying, software solutions were allowed, and cost/benefit analysis would have required them, but NTIA had no stomach for that. This was in part because it does not have a great deal of IT expertise, and that is because it doesn't have real IT jurisdiction.

The safety market is relatively small, and so balkanized in its decision making and purchasing (120,000+ individual agencies), that it is not an easy market to crack. Nor are the individual domains (EMS, 9-1-1, fire, police, transportation, emergency management) calling for integrated emergency information services amongst all of them. Nor can I find anyone in power in government taking that overall view. After all, as discussed above, most of those in government have communications training and responsibilities. Federal budgets and programs are about communications. The government IT people are generally elsewhere.

So it has been simple for the big IT players, who now have large DC presences, to mostly ignore the safety market.

If Google, Yahoo, Microsoft or their ilk took on the safety market, treated it like a virtual enterprise, and developed standards-based managed application layer services for it, could they cause huge leaps forward in service to the public in emergencies large and small (and major overall cost savings)? Absolutely. Could they make a pile of money breaking down the wall between the public and emergency response (e.g. making sure my health records stored at Google were available to 9-1-1, EMS and the trauma center when I get hurt)? You bet.

But from their perspective, sacrificing the small submarket today and a larger potential future one are cheap prices to pay to avoid the federal world the telcos have to inhabit. Plus they have a lot of other things to do.

Tuesday, September 2, 2008

NPR Leads A Web Response to Gustav

Here is an interview from National Public Radio's news service which I thought was worth sharing. Using his experience from Katrina, NPR's lead Web 2.0 guy, Andy Carvin, has led a crash effort of volunteers to provide various informational support services for the response to Hurricane Gustav. David Aylward


September 2, 2008



NPR's Andy Carvin on the Role of Social Media in Gustav Coverage



Al Tompkins



Andy Carvin's job, as the senior strategist for social media at NPR, is to build bridges between NPR and its fans and social network users on places like Twitter and Facebook. Carvin once defined "a truly great blog" as a place where a community forms, and where members find themselves almost compelled to join the conversation.





NPR's Andy Carvin

For Hurricane Gustav, he has led 500 volunteers putting together the Gustav Information Center, which includes a Wiki and a site called "Voices of Gustav." The Voices site is set up to accept calls from people who have been displaced, with the idea that volunteers would transcribe the calls and post them online in a searchable format. That effort tapped into the Utterz Web site. The effort includes three Twitter feeds including GustavAlerts, which is a breaking weather feed. GustavNews follows news stories and GustavBlogs focuses on how blogs are reporting the storm. Another team of 50 or so volunteers is working on transcribing reports from ham radio operators and other radio scans.

You will notice, by the way, that nowhere on the Gustav Information Center do you find an NPR logo, a link to NPR or any mention of NPR at all. It is a product by the people and for the people.

Carvin tells me that he thinks of Twitter as a citizen generated wire service while the wiki is more like a reference desk.

Several times over the last few days, the volunteers have drawn on their experience of working with Carvin in building Katrina Aftermath. That groundbreaking site encouraged people to send in breaking news about Hurricane Katrina, including photos and missing person information.

Al Tompkins: You have worked nearly around the clock all weekend on the site. Why is it so important?

Andy Carvin: It's so easy to forget that there are large numbers of people on the Internet with certain types of expertise that can prove to be invaluable in times of crisis. When you think of typical volunteers in an emergency, it's often people with EMS backgrounds, Red Cross volunteers and the like, but not people with technology skills. Yet many Internet-savvy people can bring things to the table, pulling together an amazing array of tools and resources that can be useful to the public in times of crisis. So I'm working with an incredible group of these online volunteers to do just that.

Your hurricane page has drawn hundreds of volunteers. What is it that you are doing that news sites are not?

Carvin: Actually, a lot of what we're doing is related to what news orgs are doing. For example, some of the earliest people I saw get on board were staff at Mississippi Public Broadcasting, who immediately began to send out local emergency alerts via Twitter. And the very first person who offered to help was John Tynan, a Web developer at KJZZ Public Radio in Phoenix. So there are definitely volunteers who are coming at this from a journalism perspective.

One challenge that news orgs often face is the ability to mobilize lots of volunteers. Even if you have a huge online development team, it can be a challenge to roll out every online service you'd like to do during an emergency. With this volunteer effort, people are coming out of the woodwork to drop everything and work on hurricane-related mashups, collect information for our wiki, develop text-messaging interfaces, etc.

Meanwhile, a lot of news sites aren't really designed for heavy public input. They may invite users to post comments, upload photos, etc, but often not much more than that. By utilizing free tools for building wikis, social networking interfaces, Twitter feeds, Google Maps, etc, we're able to mobilize folks to complete very detailed work and collaborate as equals. Over time, though, I'm hoping to see more of this happen within news sites. At NPR.org, for example, we're planning to deploy social networking tools later this fall, specifically to start building relationships with users as partners in editorial projects. So in the future, I'm hoping we'll have both the tools and the human network in place to develop these projects more directly with NPR journalists.

Your team has built a Facebook page that includes a message center. How does that work?

Carvin: Actually, the Facebook page was set up by one of our volunteers mainly as gateway to direct people to our main collaboration site, a social network located at http://gustav08.ning.com. Other Facebook groups have also popped up all over the place, and we're trying to reach out to them to make sure we're not canceling out each other's work. That's often a problem in these situations. During Katrina, for example, lots of different websites started collecting info on missing persons, but not in a coordinated fashion, so the data was really inconsistent. We eventually had to pull together a team of volunteers to sort through all the data sets and create an exchange format that would make it more useful to the Red Cross and other relief agencies. It also happens on a smaller scale - people creating competing Google Maps, for example. So much of my time has been spent just getting different independent teams of volunteers talking with each other so they can collaborate and avoid reinventing the wheel.

What is the value of a hurricane wiki?

Carvin: The wiki is intended as a reference guide to news sources, emergency services, charities and the like. There's very little editorial content there - the goal is to help people find useful sources of information and send them on their way. We're still building out the wiki, though. I'm hoping that as many people come to contribute to the wiki as come to browse it, so we can have it fully ready before the storm comes ashore.

The real action, though, is taking place on our social network, http://gustav08.ning.com. We have around 500 people participating there, many of whom are using the social network to direct individual projects, like the Google Map, divvying out wiki assignments, aggregating user-generated content, etc. The social network's homepage is also intended as a more dynamic version of the wiki, displaying the latest photos, alerts, news stories, tweets, Utterz audio messages, etc., in real time.

How important has Twitter been to your team?

Carvin: Twitter allowed us to launch and mobilize faster than ever before. During the tsunami and Katrina, much of what we did to pull together was word-of-mouth through email lists and blogs. With Twitter, I was able to get things started by simply telling my Twitter followers I wanted to pull together and needed volunteers. Immediately I saw my tweets being forwarded from one Twitter user to another. And some of these folks forwarding my tweets have tens of thousands of subscribers, so word spread really fast. In the two days since I started, I've used Twitter to send out more alerts, request volunteers with specific skill sets, announce new tools we've rolled out, etc. We also launched @GustavAlerts, a Twitter account that forwards National Hurricane Center alerts, and are trying to do the same for news stories and blog posts related to Gustav. In a sense, you can break it down this way: the social network is our operations center and live broadcast, the wiki is our reference desk and Twitter is our news wire service.


What could traditional news sites learn from you?

Carvin: The biggest challenge, I think, is breaking down the walls between journalists and the people formerly known as the audience. If you treat them as an audience - treat them passively - don't expect to get much more from them than letters to the editor. But the public can act as your bookers, your fixers, your librarians, your engineers and even your producers if you can give them a vision of what you want to accomplish together and the space they need to go do it. It's also important to not fear sending people away from your own website when necessary. Even as NPR builds up its internal social networking infrastructure, for example, we still plan to continue reaching out to communities on Facebook, Twitter, Flickr, etc, because that's where those communities spend most of their time and are comfortable working with each other. They have unique infrastructures and dynamics that could never be fully replicated within a news org, so you need to be prepared to be working across multiple networks and connect the dots. And when a story breaks quickly and you need help, you need to act quickly, too. Use whatever tools are available to get the public involved in helping you pull it all together.

What do you wish you could do but don't have the resources/volunteers/technology to do right now?

Carvin: Right now we mostly need more people - more people to research and produce different sections of the wiki, in particular. For a while we were short on Google Maps experts, but we've reached out to Google and they've helped connect us with more experts. The thing that's still missing, though, is the perfect interface for coordinating all of this activity. For 9/11, we used listservs; for the tsunami, it was blogs and aggregators, and then for Katrina, there was all of that, plus wikis and a lot of user-generated content. Now we've added social networks and Google Maps to the mix. But we still need a better system for coordination, so people don't duplicate efforts, or worse, cancel each other out. Frankly, the tools may be just fine, but it's our method of interaction that needs improvement. For one thing, I'm already regretting not having a more disciplined system for passing off assignments to keep things rolling 24/7, and we could have done a better job at organizing assignment boards and identifying team members. Other things I wish we could have done more easily were SMS relays so people could send and receive text messages without having to rely on Twitter, since not everyone has Internet access and Twitter limits the number of texts in a given week. Better SMS relay networks are something we've talked about since the tsunami but still haven't mastered. And that's just off the top of my head - I'm sure my volunteers could add hundreds of other things to the list. :-)

What happens to the site once the storm passes?

Carvin: After the tsunami and Katrina, we kept the projects rolling for a while as long as there was news to share, particularly in terms of charitable opportunities. And given the fact that Hannah is heading to the East Coast, it's quite possible we'll have to switch gears to that storm. But once everything quiets down, I'd love to see someone come in independently and analyze everything we did, and help create a template for us to make it easier to mobilize the next time around. I'm very fortunate to have volunteer veterans from previous disasters taking the lead on this project, and it gets a little easier each time. But the tools keep evolving, too, so we need to be nimble enough to integrate the next Twitter, Qik or Ning that comes around during the next disaster. But there's a lot of work to be done to have better systems in place that make it easier for everyone to mobilize at the drop of a hat and coordinate with news orgs and emergency services agencies. No one ever said this would be easy. :-)

More from Andy Carvin: In 2008, he helped launch Get My Vote, which Carvin says "invites the public to create audio, video or text political commentaries about what motivates them to support specific candidates." Read his personal Web site here. PBS' Learning Now site "is a weblog that explores how new technology and Internet culture affect how educators teach and children learn."

Friday, August 29, 2008

Is "9-1-1" call taking? Or a response system?

A recent exchange on the 9-1-1 listserv was interesting. Some were complaining that the public and press were blaming 9-1-1 organizations for failures in other parts of the emergency response chain. They argued that people should understand that 9-1-1 simply answers the public's calls and connects with the right response agency (police, fire or EMS) -- the actions of which are beyond the control of the 9-1-1 center.

Others noted repeated instances (also my personal experience) when members of the public thought that 9-1-1 was the whole response system. In fact, the public thinks "9-1-1" is the full emergency response system -- functioning as an integrated whole to respond to their emergencies.

I think the public is right (to want it to be that way) -- and far ahead of most in the emergency response organizations in thinking about emergency services in a modern way. When they call 9-1-1, the public is expecting an end to end service, not a set of stovepipes that talk to each other.

We in emergency response don't tend to think of ourselves as a unified whole -- but instead as a set of distinct stovepipes: 9-1-1, EMS, fire, police, emergency rooms, public health, emergency managers, traffic managers, hospitals, trauma centers, doctors' offices, urgent care, mental health, poison control. As a result, we have systems that can't communicate, we incur significant expenses in duplicative systems and processes, and we can't measure end-to-end outcomes. We optimize within each stovepipe, which is exactly the wrong way to optimize end to end.

It doesn't help a heart attack victim for 9-1-1 and EMS to do their jobs perfectly if the patient then sits in an ED repeating all the same information and waiting for a doctor to pass them along (instead of skipping the ED entirely and going directly to the cath lab, saving 30 minutes, as is being done in Seattle now because of integrated information systems).

Our balkanized organization is a result of history, and it has worked pretty well, but that doesn't mean we should not listen to the public and change how we think about, plan and deliver emergency service for the future.

David Aylward

Tuesday, March 25, 2008

Thoughts on the D Block/Public Safety Broadband Network Auction: Policy Right; Business Wrong

By David Aylward, COMCARE Director

It is really a shame the FCC’s D Block/Public Safety auction didn’t work. But even in the failure, we should be delighted at the enormous policy progress it represents. Let’s hope the parties involved get the business side correct on the second round.

I taught a law school seminar recently, using this issue as an example of how change in communications policy occurs. My focus was on the extraordinary revolution in spectrum policy and emergency communications that Morgan O’Brien has brought about with his Cyren Call plan. I told the students Marx would be disappointed, because change here was so clearly the work of a handful of individuals, not inexorable economic forces. When law enforcement leader Harlin McEwen recruited safety leaders to support O’Brien’s plan, the FCC adopted and applied many of its key principles to the pre-existing spectrum allocation. In doing so, the Commission basically followed the subsequent plan proposed and lobbied for hard by Reed Hundt’s now-defunct Frontline. (There in two sentences is a year’s worth of intensive lobbying by scores of parties!) Morgan and Harlin touched off tectonic shifts in spectrum licensing policy and emergency communications architecture, and the FCC sought to implement them.

In a single year, a huge policy break was made from our emergency agencies’ balkanized, compartmentalized and non-standardized communications to a modern, national Internet Protocol-based approach. From “local everything,” look at what happened. Thanks to these folks’ leadership, we have gone from local to national licenses, from local to national networks, from self-owned to managed services, from narrowband (and “wideband”) to IP broadband, from siloed access control and identity management to shared core services, from separate systems to sharing commercial spectrum and networks, and from separate technology to sharing in the benefits of commercial R&D. These are all extraordinary and positive developments, whatever happens next in the auction, and they will help move emergency communications into the 21st century.
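
To give one concrete (and entirely hypothetical) illustration of what “shared core services” could mean in practice -- this is my own sketch, not drawn from the FCC’s, the PSST’s or Cyren Call’s actual designs, and the agency names, roles and policy in it are invented -- consider a single identity and access-control service that every participating agency queries, instead of each maintaining its own siloed credential store:

```python
from dataclasses import dataclass
from typing import Dict, Set

@dataclass
class Credential:
    """A responder's identity as asserted by their home agency."""
    user_id: str
    home_agency: str
    roles: Set[str]      # e.g. {"fire", "ems", "incident_commander"}

class SharedIdentityService:
    """One core service consulted by every participating agency,
    replacing per-agency, siloed access-control lists."""

    def __init__(self) -> None:
        self._credentials: Dict[str, Credential] = {}
        # Which roles may access which classes of emergency data.
        self._policy: Dict[str, Set[str]] = {
            "hazmat_data": {"fire", "incident_commander"},
            "patient_data": {"ems", "hospital"},
        }

    def register(self, cred: Credential) -> None:
        self._credentials[cred.user_id] = cred

    def may_access(self, user_id: str, resource: str) -> bool:
        cred = self._credentials.get(user_id)
        if cred is None:
            return False
        allowed_roles = self._policy.get(resource, set())
        return bool(cred.roles & allowed_roles)

# Hypothetical usage: a fire captain credentialed by one jurisdiction is
# recognized on an incident network run by another, because both trust
# the same shared core service.
ids = SharedIdentityService()
ids.register(Credential("capt-jones", "Springfield Fire", {"fire"}))
print(ids.may_access("capt-jones", "hazmat_data"))   # True
print(ids.may_access("capt-jones", "patient_data"))  # False
```

The design point is that trust is established once, in the shared core, rather than renegotiated pairwise by every pair of agencies that needs to exchange data.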

Where this approach is running into trouble is in the business issues of who does what (network details, service offerings) and who gets to make money – the issues at which regulatory lawyers in and out of government are generally awful. The key to this, and to any successful public/private deal, is a marketplace test: can you finance it, and then make the numbers work over time? Notwithstanding the brilliance and innovation of its conception, and an enormous amount of good-faith work by many parties in a short period of time, this plan didn’t even get out of the starting gate. It is a total failure on a business basis. Not even close.

The market enthusiastically embraced the adjoining beachfront spectrum and threw billions of dollars at it, substantially more than expected. But the D Block itself was spurned, even with a floor price much, much lower than that of a comparable amount of spectrum – spectrum that lacked the D Block’s advantage of access to the additional public safety spectrum and safety customers. How did the balance here between public and private interests get miscalculated so badly?

First, let’s not blame the entrepreneur who got this ball rolling. Morgan O’Brien never made any secret of his desire to run the public safety network as a business (versus acting as some sort of expert consultant to safety leaders). Congress would not change the auction rules as he proposed and hand over a larger block of 700 MHz spectrum in the form of a public safety license. At that point, he could have tried to bid on the D Block in competition with Frontline and others, but his much smarter approach was to get the Public Safety Spectrum Trust (PSST) to become (or hire) a network operator – and to have the PSST quickly give that role to Cyren Call.

Folks are focused on the $500 million payment from D Block bidders that Morgan allegedly asked for. To me that is frosting, and it creates an aura of back-room deals when the important issues here were, and are, not hidden at all. The key document in this grand plan for a private sector/public safety broadband network is the Bidder Information Document (BID) for D Block bidders, prepared by Cyren Call on behalf of the PSST, made public last November and posted on the PSST’s website (it includes at least one clear reference to the successful bidder paying up front for access to the safety spectrum). The BID told potential D Block bidders what they would need to know and do to meet the Commission’s public safety obligation for the winning bidder. I encourage you to take 30 minutes and read it. What it describes is about as good as it could be from two perspectives: (1) the power of the PSST to have a new, very high quality broadband network for public safety built and supervised according to its desires and design, and (2) the ability of Cyren Call to run a very serious, large national business on behalf of all public safety clients – effectively an MVNO for safety, but with some real power over the underlying provider. The architecture diagrams and explanations in the document describe in detail the extensive network operations and services Cyren Call would provide, including service delivery and billing. The BID says PSST/Cyren Call will “own the customer.” Looking at the income side of the ledger, therefore, a bidder would see Cyren Call/PSST taking the public safety business and paying some level of wholesale rate to the underlying carrier.

Getting to design a network that meets their needs, without serious regard at this point to cost, is attractive to emergency responders. The bidding document describes requirements that are often substantially more demanding than those of many if not most public safety networks, and certainly more demanding than most commercial networks today – e.g., build-out, encryption, back-up power and geographic/in-building coverage. Just look at the huge fight over the back-up power requirements the FCC is now trying to impose on the wireless industry. The bidding document insists on a strong (and thus expensive) solution there, as it does in just about every area.

There is nothing inherently wrong with any of this. Indeed, from the perspective of emergency responder interests, there is a lot to be said for it, although our experience (and the public’s) with single providers hasn’t always been a happy one.

But power over network design and operations is just one value, one part of the overall equation, for emergency agency constituents/consumers. They will be paying customers of this network if and when it gets built. So beyond power and authority they have two very different interests: first, getting someone to build it (i.e., getting a successful D Block winner that builds out the new broadband network fast), and second, being able to buy service on the network at reasonable prices. For these purposes, public safety users’ interests ultimately may be quite different from the way they are being defined today in the start-up design process.

For those who care about this issue, it is worth at least skimming the formal document the PSST circulated to potential bidders for the D Block spectrum in November, before the current spectrum auction started. It describes in great detail what the commercial winner of the D Block auction would need to do with the adjoining public safety spectrum, and the relationship it would have with the PSST. As you read it, put yourself in the role of an investment bank deciding whether to finance Frontline, or of Warren Buffett spending his own money: http://www.psst.org/documents/BID2_0.pdf

It’s a bit too easy to criticize public safety leaders for designing a “platinum network”. What do we expect them to do? Negotiate with themselves? After all, folks in our world are focused on safety, not business and finance. How could a first set of requirements developed by and for national safety leaders be anything other than an ideal network? (Although it is worth noting that they entirely missed the value of using the backbone these wireless services will need as the inter-organizational emergency backbone many of us have been advocating as well.) And why should they not support the expansive role Morgan wanted for Cyren Call? His hard work got them to the table in the first place. The problem appears to be that there was no serious counterweight representing the other half of the partnership (those with experience in, and an interest in, building large wireless networks), and the FCC apparently made no effective effort to solve that problem before the auction.

The winner of the D Block auction would have to compete with the commercial companies that won other parts of the spectrum, and with incumbents holding current spectrum. Presumably it would compensate for the increased expense of a network built to meet special public safety needs by charging safety users more, and by offering them specialized applications (e.g., radio-over-Internet-Protocol interoperability services). Certainly, there should also be some attraction in the large, stable, recession-proof base of public safety clients. But it looks like much of the upside was going to Cyren Call/PSST, while the costs of delivering the underlying network were clearly both high and highly uncertain for potential D Block bidders.

One could say, “Well, that was all negotiable after the auction,” but no one in their right mind could bid into that much uncertainty at every level, even in the best of credit conditions, particularly with a very large auction down payment at risk if the FCC later decided the winner had not bargained in “good faith.”

I have raised money for start-up communications companies, my own and others’. Reading this Bidder Information Document, I find it inconceivable that anyone could either bid or raise the money to bid. But don’t fault Morgan O’Brien for trying to maximize the business he would get to run; let’s thank him for starting a revolution and significantly advancing how regulators and traditional public safety groups think about meeting safety communications needs. And don’t blame Harlin McEwen for rushing to hire his ally and trying to design the best possible network for safety uses. The question is: where was the FCC in supervising this process to find the balance needed to make a public/private deal financeable, and viable long term?

Let’s hope the participants get it right the second time around.

David Aylward is a founder and Director of COMCARE Emergency Response Alliance (www.comcare.org), President of National Strategies, Inc., and former Chief Counsel and Staff Director of the US House of Representatives’ Subcommittee on Telecommunications, Consumer Protection and Finance. He also serves as a Director of the E9-1-1 Institute and is Vice Chair of the Network Centric Operations Industry Consortium’s Technical Committee on Net Enabled Emergency Response. The thoughts expressed here are his own.