Mayor Emanuel has moved quickly to have city agencies assemble and distribute data from their operations. This data will also be used internally for data analysis to improve city decision-making and management. To carry out these missions, he has appointed a Chief Data Officer who will be working with an Open Data Advisory Group made up of data coordinators from each City agency.
The agencies will be expected to publish public data sets on a regular basis. In addition to providing information regarding the functioning of the city executive, it is expected that the released data will serve as a platform for the creation of innovative tools that will improve the lives of all residents.
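As a concrete illustration of how developers might build on the released data, here is a minimal sketch of pulling a few rows from the City's data portal. It assumes the portal's Socrata-style /resource/<dataset-id>.json endpoint; the dataset identifier below is a placeholder, not an actual dataset.

```python
import json
import urllib.request

# Placeholder dataset identifier -- substitute the ID of any dataset
# published on the City's portal (data.cityofchicago.org).
DATASET_ID = "xxxx-xxxx"
URL = f"https://data.cityofchicago.org/resource/{DATASET_ID}.json?$limit=5"

# The Socrata-style endpoint returns a JSON array of row objects,
# each keyed by column name.
with urllib.request.urlopen(URL) as response:
    rows = json.load(response)

for row in rows:
    print(row)
```

Heavier use would typically call for registering an application token with the portal, but this request-and-parse pattern is the core of most of the "innovative tools" the City hopes to see built.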
To improve City operations, the Mayor has launched a citywide data collection project funded by a grant from the Chicago-based MacArthur Foundation. It will consolidate the data into a single platform for analysis that can inform decision-making and improve operations.
In a December 14 press release, the Mayor announced the release of an unprecedented amount of data on the City's procurement process. This will include "...posting all winning and losing bids and proposals submitted by vendors online anytime a contract is awarded, including all line items of competitive bids. DPS will start posting these documents online during the first quarter of 2013..."
Two other initiatives will go beyond just the City of Chicago. "MetroChicagoData.org" will combine public data from Chicago, multiple local governments, and the State into a single portal. Moving beyond the local geography, the City is partnering with New York, San Francisco, Seattle, and the federal government on an initiative called "Cities.Data.Gov". This portal is designed to help city officials and developers work together to improve the information available to their residents.
The effort behind all these extensive data collection and distribution initiatives lends much credibility to the Emanuel administration's commitment to fostering real transparency and accountability.
In September, the Chicago Department of Information Technology partnered with Code for America to announce the development of Chicago’s new “Open311” two-way service request system. The system also allows residents to submit photos with service requests, encouraging more accurate and detailed reporting of issues to City departments.
A December 17th press release indicates the system is now up and running. Formerly, the City could only send outgoing text messages to residents. The new 311 platform allows citizens to text service requests to the City as well. This new channel also makes it possible for citizens without smartphones or computers to place such requests. The system also provides for tracking the progress of a request and signing up to receive an email when the issue is resolved.
The "311 platform" allows for third parties to build applications on the system for creating even more ways for the public to interact with the City. Several such apps have already been created.
The National Academy of Sciences has defined scientific literacy as ‘‘the knowledge and understanding of scientific concepts and processes required for personal decision-making, participation in civic and personal affairs, and economic productivity.’’
How scientifically literate we are in the United States is really open to question. A recent poll conducted by the Pew Research Center for the People and the Press (2009) finds that only 28% of American adults currently qualify as scientifically literate. Most scientists (85%) also think that one of the major problems for science is that the public does not know very much about science.
It is in this social context that the multi-organizational web site Science.gov was launched in 2002. Given the statistics mentioned above, a site that attempts to provide accessible and understandable information about science and its application in our lives is sorely needed.
The website is the result of the collaborative effort of 17 U.S. government science organizations within 13 Federal Agencies. These agencies form the voluntary Science.gov Alliance which governs Science.gov. "Currently in its fifth generation, Science.gov provides a search of over 55 scientific databases and 200 million pages of science information with just one query, and is a gateway to over 2100 scientific Websites."
October 2012 marked a major update to Science.gov, in honor of its 10th anniversary. The current iteration of the web site provides many advanced features and capabilities:
• Accessing over 55 databases and 200 million pages of science information via one query
• Clustering of results by subtopics, authors, or dates to help you target your search
• Wikipedia results related to your search terms
• Eureka News results related to your search terms
• Mark & send option for emailing results to friends and colleagues
• Download capabilities in RIS citation format (see the brief example following this list)
• Enhanced information related to your real-time search
• Aggregated Science News Feed, also available on Twitter
• Updated Alerts service
• Image Search
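Regarding the RIS download option noted above: RIS is a simple tagged, plain-text citation format that reference managers such as EndNote and Zotero can import. A minimal sketch of writing one such record (the citation itself is made up):

```python
# A made-up citation, only to illustrate the tagged RIS layout that
# a search portal's "download in RIS" option produces for reference managers.
record = {
    "TY": "JOUR",                      # reference type: journal article
    "AU": "Doe, Jane",                 # author
    "TI": "An Example Article Title",  # title
    "JO": "Journal of Examples",       # journal name
    "PY": "2012",                      # publication year
}

with open("results.ris", "w") as f:
    for tag, value in record.items():
        f.write(f"{tag}  - {value}\n")
    f.write("ER  - \n")                # end-of-record marker required by RIS
```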
In the coming years, scientific innovation will have an ever-increasing role in our daily lives and the global economy. Successfully dealing with emerging technologies and their attendant ethical questions will require a more scientifically aware and literate citizenry. A broad-based educational program like Science.gov provides a key component in this effort.
Congratulations to the Science.gov Alliance for ten years of exemplary service to the public!
On Nov. 16, in a federal court settlement in the case of United States v. Google Inc., Google agreed to pay $22.5 million to settle Federal Trade Commission (FTC) claims that it had overridden user privacy settings on the Safari web browser to secretly track users. This case is emblematic of the marketing behavior of many commercial interests that seek to gather as much information as possible about potential customers, into detailed profiles, both to help sell products and to sell that information as a product itself.
A March 2012 FTC report, "Protecting Consumer Privacy in an Era of Rapid Change", states that "In today’s world of smart phones, smart grids, and smart cars, companies are collecting, storing, and sharing more information about consumers than ever before. Although companies use this information to innovate and deliver better products and services to consumers, they should not do so at the expense of consumer privacy." The report expands on privacy principles previously enunciated in 2010. These include:
• Privacy by Design: Build in privacy at every stage of product development;
• Give consumers the ability to make decisions about their data at a relevant time and context;
• Make information collection and use practices transparent.
Earlier this year the Obama administration proposed a Consumer Privacy Bill of Rights, calling on Congress to pass legislation that applies it to commercial sectors not subject to existing Federal data privacy laws. In the report, "Consumer Data Privacy in a Networked World", the President states that "Neither consumers nor companies have a clear set of ground rules to apply in the commercial arena. As a result, it is difficult today for consumers to assess whether a company’s privacy practices warrant their trust."
The very same observation could be applied to the detailed personal information-gathering of various levels of government as exemplified in the functioning of the governmental "Fusion Centers". These centers are entities that integrate the gathering, storage, sharing, and analysis of national security intelligence. Fusion centers are owned and operated by state and local entities with support from federal partners.
In an article in the "The Brief" ( v. 70 nos. 3-4, Fall 2012 ) the ACLU of Illinois, comments on their new report on ,"Fusion Centers in Illinois". "Fusion Centers use a multiple of private and public resources to gather quantities of personal information about many people"
"The State Police fusion center has access to the databases owned and operated by private data mining corporations. These include Dun and Bradstreet, Experian, ISO ClaimSearch, and Lexis-Nexis (including its subsidiaries Accurint and ChoicePoint). These private databases provide non-criminal information about millions of innocent persons, including their identifying information, employment and medical history, property, licenses, credit, and neighbors and relatives."
This is where an actual coming together occurs between commercially gathered personal information and that which the government gathers on its own. While the government may be proscribed from gathering some of this information, it can simply buy it from the commercial sector, which has no such restrictions.
The FTC, the President, and Congress appear to be aware of the need for rules on how commercial entities gather and use our personal information. However, since most of the work of the Fusion Centers is done in secret, they are in serious need of recognized privacy policies for how they carry out their work. The ACLU of Illinois recommends that:
• Fusion centers should be barred from gathering, storing, or sharing any information about any individual person absent individualized reasonable suspicion that the person is engaged in criminal activity.
• Fusion centers should be barred from gathering, storing, or sharing any information about any individual person due, in whole or in part, to that individual’s race, religion, or political beliefs.
• Fusion centers should independently determine the accuracy of information before they store and share that information.
• People should be able to find out whether a fusion center is storing information about them and to learn the substance of that information.
If commercial data gathering is perceived as endangering our privacy, then the vast new powers of government fusion centers also create new threats that need to be addressed.
Back in Nov. 2008, the organization OMB Watch called for an online FOIA (Freedom of Information Act) web site that would allow the submission of requests, track the progress of agency responses, and provide a searchable collection of previously released responses.
Traditionally, FOIA requests from citizens and companies had been handled by submitting a written request to each agency and waiting to hear back whether the request would be fulfilled. If more than one agency had to review the proposed release, it could take a long time before even that happened. Also, there was no single place to search for previously released FOIA responses.
Last month, the OMB Watch recommendation came to fruition in the form of FOIA Online. Originally developed by the Environmental Protection Agency (EPA), in partnership with the Department of Commerce and the National Archives and Records Administration, as the “FOIA Module”, it was designed to address these shortcomings and provide significant new functionality for the producing agencies and the requesting end users.
Here are two slides from a Commerce Dept. training document that concisely convey what functions the new module provides for both:
At present, FOIA Online provides a processing tool for these agencies:
Users who wish to submit a request are encouraged to create an account that will allow them to:
Hopefully, additional agencies will become participants in FOIA Online, which is based on the “Shared Service” concept that helps eliminate waste and duplication and improves the effectiveness of technology solutions for government agencies and the public they serve.
Matt Honan, a senior writer at Wired magazine, has written two recent articles on how several of his online accounts were hacked and taken over, with the loss of much personal data. As someone who is an advanced user of consumer technology and a keen observer of online developments, he offers insights and warnings that should carry much weight for the rest of us.
He discovered how much of the hacking was carried out by actually speaking with one of the perpetrators. Apparently, the door was opened to a series of intrusions by what is termed "social engineering", meaning that an interaction with a real person was used to deceptively acquire the needed private information. However, the devastating extent of the hack was made possible by his various accounts being linked. This allowed data discovered in one account to be combined with data found in another to further compromise other linked accounts.
In the second article, Mr. Honan identifies the widespread use of online passwords as the central and fatal weakness in our online authentication infrastructure. He says that passwords, even very complex ones, can no longer protect users online.
He mentions the recurring loss and theft of online user ID and password files from organizations. Also, the near-universal use of an email address as a user name makes fraudulent use of the password reset procedure another easy route to compromising a personal account.
Malware planted on our computers can also send our data to other people. As more of our applications move into the cloud, many more of our important transactions, such as banking, emailing, and storing photos and documents, become even more vulnerable to hackers and thieves.
Mr. Honan says that what happened to him can happen to any one of us, and it may become more frequent and widespread if users and providers don't start moving away from over-reliance on passwords and backup questions to provide security for our online activities.
But newer, more effective techniques may ironically require us to surrender even more personal information as we move to behavior-based identification and authentication, which will monitor patterns for anomalies that flag potential dangers. It may mean a substantial trade-off of privacy and convenience for better security. It seems like we don't win in this scenario, but at least we may not lose as big as Mr. Honan did.
In the last decade, the development of competence in the use of continually evolving consumer technologies has been widely recognized as a "life skill" similar to traditional literacy and numeracy. A 2012 report from the Institute for Prospective Technological Studies of the European Union, "Digital Competence in Practice: An Analysis of Frameworks", explores how to define and think about this new basic life skill. This is a necessary first step toward developing meaningful learning objectives.
The author points out that a literature review of this area frequently becomes "a jargon jungle" filled with terms such as: Digital Literacy, Digital Competence, eLiteracy, e-Skills, eCompetence, 'technology literacy', 'new literacies', 'multimodality', and media and information literacy.
While each of these may help explain aspects of the phenomenon, they may be too narrow in their conceptualization because they are still relying on the analogy to the decoding and encoding associated with traditional reading and writing. The author says that Digital Competence is a "multi-faceted moving target" that takes into account some key features of the new digital arena.
"The new, added dimension that is acquired though the digital is that the decoding and encoding units of meaning is made of a mixture of letters, sounds, videos, images that are organised in a not necessarily linear way."
Some of these features include the reader becoming an author when using hypertexts and multi-modal texts. Readers create a new reading experience and "text" each time they make choices about which hyperlinks to follow or not. A document is not read linearly from beginning to end, but becomes whatever the user decides it to be, through his or her choices in interacting with the content.
There is also the more direct form of end-user authorship provided by blogs, listservs, Facebook postings, and email. Sophisticated users can also become contributors to new software by creating applications for existing digital platforms.
In attempting to define a more inclusive idea of Digital Competence, the author provides an example for the library world. She quotes The American Library Association's 1989 definition of information literacy as "the ability to recognise when information is needed and the ability to locate, evaluate, and use the needed information effectively".
The author sees this as pointing toward her own definition of Digital Competence: the set of knowledge, skills, and attitudes needed today to be functional in a digital environment. She thinks that attitudes are a necessary component of the definition that has usually been neglected. Attitudes provide a pre-condition to people even considering seeking relevant knowledge and developing new skills.
Acquiring Digital Competence happens in social practice that provides "... a specific way to act and interact with technologies (and therefore it requires specific attitudes), of understanding them (and therefore holding specific knowledge), of being able to use them (and therefore having specific skills)."
The article's analysis provides a coherent, widely applicable way of understanding and eventually shaping how citizens can learn and adapt to the wave of consumer technology that is sweeping over them, and not be left behind.
CARLI is the statewide library consortium that maintains and develops a shared integrated library management system and provides meaningful electronic resources for 154 member libraries in Illinois. It recently announced the availability of over a million bibliographic records for the massive holdings of digitized content controlled by an organization called the Hathi Trust. (Hathi, pronounced hah-tee, is the Hindi word for elephant, an animal noted for its memory, wisdom, and strength; that is the origin of the elephant on their logo.) [Announced in an April 9, 2012 email]
How did this interestingly named organization come about? The Hathi Trust is basically a huge digital library whose initial content came from the mass Google digitization of the print collections of thirteen universities of the Committee on Institutional Cooperation, the University of California system, and the University of Virginia.
At present it has a membership of 60+ libraries and 3 consortia. More recently, the Trust has added digitized book and journal content from the Internet Archive, as well as other local content digitized by partner institutions. The Trust is also exploring the possibility of providing digital audio and image files (such as maps).
The basic aims of this organization are to preserve this content in perpetuity, to index and provide metadata for its holdings, and to provide access to its collections for its partner libraries and beyond. The availability of the records through CARLI is one example. However, the level of access does vary for different users.
Users affiliated with HathiTrust partner institutions, the libraries formally affiliated with the Hathi Trust, are able to download full PDFs of all public domain works and works made available under Creative Commons licenses. For users in libraries not so affiliated, the degree of access is rather limited. Most, if not all, of the original Google-scanned materials are controlled by a third-party agreement: libraries' agreements with Google require the Hathi Trust to take steps to prevent bulk download of materials Google has digitized.
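One way to check which level of access applies to a given title is HathiTrust's public Bibliographic API, which reports a rights status for each digitized item. A rough sketch follows; the OCLC number is a placeholder, and the exact response fields should be verified against HathiTrust's API documentation.

```python
import json
import urllib.request

# Placeholder OCLC number -- substitute the number from a catalog record.
oclc = "00000000"
url = f"https://catalog.hathitrust.org/api/volumes/brief/oclc/{oclc}.json"

with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

# Each digitized item carries a rights statement, e.g. "Full view" for
# public domain volumes or "Limited (search-only)" for restricted ones.
for item in data.get("items", []):
    print(item.get("htid"), "->", item.get("usRightsString"))
```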
So for DePaul users and many other non-partner libraries, the access and download option is one page at a time! (With the exception of materials that are not subject to the contract with Google or were provided under a Creative Commons license.) Since huge collections of federal government documents were part of the mass digitization projects undertaken by Google, these public domain materials also fall under these restrictions.
Google claims that "Google’s mission is to organize the world’s information and make it universally accessible and useful." It seems more than a bit incongruous that it would enforce an agreement that makes scanned public domain materials available only in a manner that effectively limits useful access. Unlike commercially published works with copyright claims by authors and publishers, federal government information is generally in the public domain and should be provided without the encumbrances placed on commercially published works.
The availability of the Hathi Trust records through our own local library catalog is a very useful tool for discovering content that has been housed in some of our largest research libraries. But being able to conveniently read, copy, and download public domain materials for research purposes still eludes those users and libraries that are not in the club.
Established in 2009, Law School Transparency (LST) is a non-profit organization seeking to improve consumer information concerning the value of legal education and to usher in consumer-oriented reforms to the current law school model. It was formed in the context of the general economic decline that has impacted and changed the market for legal services, which has led to a growing number of law graduates with huge loans and under-employment or unemployment.
Beyond the economic conditions, however, many of these grads are questioning the accuracy and completeness of the information they received as prospective law students, which affected their willingness to attend law school in the first place. LST indicates that the reporting standards themselves were inadequate for informed decision-making by prospective students. They also report that most law schools were initially unwilling to provide the data they had that could better inform student choices.
Most recently, LST has developed an online clearinghouse of school and employment data that attempts to provide relevant information for each ABA-accredited law school. They collected and reconciled data from four sources: the ABA, U.S. News, school websites, and school NALP (National Association for Law Placement) reports. Only about 1 in 4 schools has chosen to make its NALP report public so far. The job of reconciling the data from these multiple sources to provide meaningful comparisons was challenging; they say they tried to "reconcile incompatible data as fairly as possible."
"For each ABA-accredited law school, the database includes key employment statistics; charts that break down the percentage of graduates in lawyer and non-lawyer jobs; graphs that detail whether jobs were long-term or short-term; maps showing the states in which the largest percentage of graduates
found jobs; salary breakdowns; and the jobs reports that schools submitted to the ABA and NALP." For law school where it is available the Raw Data used for creating the graphs, is also presented.
In addition to this basic information, the web page for each law school provides an "under-employment score"; the percentage of students who reported their salaries (important for the validity of the statistics); and tuition figures, including a total debt projection for the classes of 2015 and 2016.
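As a rough illustration of the kind of reconciliation involved, the sketch below merges per-school records from two sources and derives a simple rate. The column names, figures, and formula are hypothetical and are not LST's actual data or methodology.

```python
import pandas as pd

# Hypothetical figures and column names -- not LST's actual data or formula.
aba = pd.DataFrame({
    "school": ["Example Law A", "Example Law B"],
    "graduates": [250, 180],
    "long_term_lawyer_jobs": [150, 95],
})
nalp = pd.DataFrame({
    "school": ["Example Law A", "Example Law B"],
    "unemployed_or_short_term": [60, 55],
})

# Reconcile the two sources on the school name, then derive a simple
# under-employment rate from the merged figures.
merged = aba.merge(nalp, on="school")
merged["underemployment_rate"] = (
    merged["unemployed_or_short_term"] / merged["graduates"]
).round(3)

print(merged[["school", "underemployment_rate"]])
```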
The continued development of this first-of-a-kind database and the inclusion of even more previously non-public data should advance "... the simple notion that schools need to disclose employment data to fulfill their obligations to fairly and adequately inform prospective students about the nature of the legal profession and opportunities within it."