Vicky Steeves' presentation from the panel 'Open Science: Understanding Modern Research Practices' presented at ACRL 2017 in Baltimore, MD, with Robin Champieux, Jeff Leek, Brett Davidson (organizer), and Eka Grguric (organizer).
What's in the LIS Scholarship Archive
This Note argues that simply grabbing pre-packaged analysis off the ESRB's shelf and running it through legislative machinery is not a recipe for good, or even constitutionally acceptable, law. This legislation, which curtails expression protected by the First Amendment, cannot rest on such stale standards. Additionally, this Note aims to suggest the importance of a serious and respectful application of First Amendment jurisprudence to video games. Although a recent phenomenon, video games comprise a medium that is increasingly popular and has already begun to demonstrate a capacity for expression as sophisticated and powerful as that of any other speaker in the modern marketplace of ideas. Any legislation governing the medium should be grounded in fresh First Amendment analysis. But the legal percolation of the "Hot Coffee" controversy should also serve as a wake-up call for the expressive possibilities of the medium as a whole.
The investigators designed and distributed a survey to analyze academic health sciences libraries’ use of the popular social network, Facebook.
Observations at an indoor rock climbing gym reveal Elfreda Chatman's small world concept in action. The behaviors of the "legitimized others" (Chatman, 1999), or what I will call the core community of climbers, can be analyzed through the four concepts of Chatman's Theory of Normative Behavior: social norms, worldview, social types, and information behavior (Burnett, Besant, & Chatman, 2001). I will discuss each concept in turn, and model a climber's information behavior as it pertains to climbing a selected route. I will also draw on sociological research into subculture, which shares many of the characteristics of Chatman's small world.
Objective: The objective of this study was to analyze bibliometric data from ISI, National Institutes of Health (NIH) funding data, and faculty size information for Association of American Medical Colleges (AAMC) member schools during 1997 to 2007 to assess research productivity and impact.

Methods: This study gathered and synthesized 10 metrics for almost all AAMC medical schools (n = 123): (1) total number of published articles per medical school, (2) total number of citations to published articles per medical school, (3) average number of citations per article, (4) institutional impact indices, (5) institutional percentages of articles with zero citations, (6) annual average number of faculty per medical school, (7) total amount of NIH funding per medical school, (8) average amount of NIH grant money awarded per faculty member, (9) average number of articles per faculty member, and (10) average number of citations per faculty member. Using principal components analysis, the author calculated the relationships between measures, if they existed.

Results: Principal components analysis revealed 3 major clusters of the variables that accounted for 91% of the total variance: (1) institutional research productivity, (2) research influence or impact, and (3) individual faculty research productivity. Depending on the variables in each cluster, medical school research may be appropriately evaluated in a more nuanced way. Significant correlations exist between extracted factors, indicating an interrelatedness of all variables. Total NIH funding may relate more strongly to the quality of the research than the quantity of the research. The elimination of medical schools with outliers in 1 or more indicators (n = 20) altered the analysis considerably.

Conclusions: Though popular, ordinal rankings cannot adequately describe the multidimensional nature of a medical school's research productivity and impact. This study provides statistics that can be used in conjunction with other sound methodologies to provide a more authentic view of a medical school's research. The large variance of the collected data suggests that refining bibliometric data by discipline, peer groups, or journal information may provide a more precise assessment.
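The principal components analysis described in the abstract above can be sketched in miniature. This is a hypothetical illustration, not the study's actual analysis or data: it standardizes a small matrix of per-school metrics (rows = schools, columns = metrics) and extracts components via singular value decomposition, the standard way to compute the fraction of variance each component explains.

```python
import numpy as np

# Hypothetical data: 20 "schools" x 5 "metrics" (synthetic, for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))

# Standardize each metric to zero mean and unit variance -- necessary
# when metrics are on very different scales (funding dollars vs. article counts).
Z = (X - X.mean(axis=0)) / X.std(axis=0)

# SVD of the standardized matrix yields the principal components directly;
# squared singular values give each component's share of total variance.
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s**2 / np.sum(s**2)

print(explained)  # fractions of variance, in descending order
```

One would then inspect the loadings in `Vt` to see which metrics cluster on each component, analogous to the three clusters the study reports.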
Using Institute for Scientific Information (ISI) data, this paper calculated institutional self-citation rates (ISCRs) for 96 of the top research universities in the United States from 2005 to 2007. Exhibiting temporal patterns similar to author and journal self-citations, the ISCR was 29% in the first year post-publication and decreased significantly in the second year post-publication (19%). Modeling the data via power laws revealed that total publications and citations did not correlate with the ISCR, but did correlate highly with ISCs. California Institute of Technology exhibited the highest ISCR at 31%. Academic and cultural factors are discussed in relation to ISCRs.
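A power-law relationship like the one modeled above, y = a·x^b, is typically fit by linear regression in log-log space. The sketch below uses synthetic numbers, not the paper's data, to show the technique: publication counts and self-citation counts are hypothetical.

```python
import numpy as np

# Synthetic illustration: total publications (x) vs. institutional
# self-citations (y), generated to follow a known power law y = 2 * x^0.8.
x = np.array([100.0, 500.0, 1000.0, 5000.0, 10000.0])
y = 2.0 * x**0.8

# Taking logs turns y = a * x^b into log y = log a + b * log x,
# so an ordinary least-squares line recovers the exponent b and prefactor a.
b, log_a = np.polyfit(np.log(x), np.log(y), 1)
a = np.exp(log_a)
```

On real data the fit would not be exact, and one would check goodness of fit before concluding a power law holds.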
Achieving research reproducibility is challenging in many ways: there are social and cultural obstacles as well as a constantly changing technical landscape that makes replicating and reproducing research difficult. Users face challenges in reproducing research across different operating systems, in using different versions of software across long projects and among collaborations, and in using publicly available work. The dependencies required to reproduce the computational environments in which research happens can be exceptionally hard to track – in many cases, these dependencies are hidden or nested too deeply to discover, and thus impossible to install on a new machine, which means adoption remains low. In this paper, we present ReproZip, an open source tool to help overcome the technical difficulties involved in preserving and replicating research, applications, databases, software, and more. We examine the current use cases of ReproZip, ranging from digital humanities to machine learning. We also explore potential library use cases for ReproZip, particularly in digital libraries and archives, liaison librarianship, and other library services. We believe that libraries and archives can leverage ReproZip to deliver more robust reproducibility services, repository services, as well as enhanced discoverability and preservation of research materials, applications, software, and computational environments.
Open Access (OA) publications allow anyone to access research information free of charge. It is difficult for researchers to discover which OA publications exist and what they charge to publish. We designed and implemented a data-driven web app and API enabling researchers to discover relevant and reputable OA publications to maximize publishing impact. We aggregated price information and journal impact data. Our goal is to provide the OA community with the tools they need to separate legitimate OA publications from unethical publishers. We believe transparency in the market will produce downward price pressure, further lowering economic barriers to publishing.
Digital scholarship is an evolving area of librarianship. In this piece we propose 10 theses, statements about what this kind of work DOES, rather than trying to define what it IS. We believe that digitally-inflected research and learning, and the characteristics they employ, are essential to the recentering of our profession's position in and across the academy. We also believe that the "digital scholarship center" has served its time, and that the activities and models for digital scholarship work are core to librarianship. This manifesto is meant to serve as a starting point for a necessary discussion, not an end-all, be-all. We hope others will write and share counter-manifestos, passionate responses, or affirming statements.
This study analyzed 2005–2006 Web of Science bibliometric data from institutions belonging to the Association of Research Libraries (ARL) and corresponding ARL statistics to find any associations between indicators from the two data sets. Principal components analysis on 36 variables from 103 universities revealed obvious associations between size-dependent variables, such as institution size, gross totals of library measures, and gross totals of articles and citations. However, size-independent library measures did not associate positively or negatively with any bibliometric indicator. More quantitative research must be done to authentically assess academic libraries’ influence on research outcomes.