Polymer conjugation increases an enzyme's circulation time and stability for use as a therapeutic agent, but this attachment inevitably affects its properties. Covalent attachment of multiple polyethylene glycol (PEG) chains of 2, 5, 10, or 20 kDa increases the molecular weight and hydrodynamic radius of the model enzyme trypsin, bringing the resulting polymer-enzyme conjugates within the recommended size limits for PDEPT applications. The denaturation temperature (Td) increases from 49 to 60 °C, expanding the enzyme's workable range of conditions. Functionalization with PEG polymers of varying lengths maintains trypsin's enzymatic activity: conjugate activities are 79-120% that of native trypsin at room temperature and 221-432% that of trypsin at 37 °C.
In this paper, rather than focusing on genes as an organising concept around which historical considerations of theory and practice in genetics are elucidated, we place genetic markers at the heart of our analysis. This reflects their central role in the subject of our account, livestock genetics concerning the domesticated pig, Sus scrofa. We define a genetic marker as a (usually material) element existing in different forms in the genome, which can be identified and mapped using a variety (and often a combination) of quantitative, classical and molecular genetic techniques. The convergence of pig genome researchers around the common object of the marker from the early 1990s allowed the distinctive theories and approaches of quantitative and molecular genetics concerning the size and distribution of gene effects to align (but never fully integrate) in projects to populate genome maps. Critical to this was the nature of markers as ontologically inert, internally heterogeneous and relational. Though genes as an organising and categorising principle remained important, the particular concatenation of limitations, opportunities, and intended research goals of the pig genetics community meant that a progressively stronger focus on the identification and mapping of markers, rather than genes per se, became a hallmark of the community. We therefore detail a way of doing genetics that differs from more gene-centred accounts. By doing so, we reveal the presence of practices, concepts and communities that would otherwise be hidden.
DNA sequencing has been characterised by scholars and life scientists as an example of 'big', 'fast' and 'automated' science in biology. This paper argues, however, that these characterisations are a product of a particular interpretation of what sequencing is, which I call 'thin sequencing'. The 'thin sequencing' perspective focuses on the determination of the order of bases in a particular stretch of DNA. Based upon my research on the pig genome mapping and sequencing projects, I provide an alternative 'thick sequencing' perspective, which also includes a number of practices that enable the sequence to travel across and be used in wider communities. If we take sequencing in the thin manner to be an event demarcated by the determination of sequences in automated sequencing machines and computers, this has consequences for the historical analysis of sequencing projects, as it focuses attention on those parts of the work of sequencing that are more centralised, fast (and accelerating) and automated. I argue instead that sequencing can be interpreted as a more open-ended process including activities such as the generation of a minimum tile path or annotation, and detail the historiographical and philosophical consequences of this move.
Highlights:
- DNA sequencing is primarily understood through a 'thin sequencing' perspective.
- I propose a 'thick sequencing' perspective.
- Thick sequencing includes different stages of assembly, evaluation and annotation.
- An alternative picture of the nature and organisation of sequencing is presented.
The history of genomic research on the pig (Sus scrofa)—as uncovered through archival research, oral histories, and the analysis of a quantitative dataset and co-authorship network—demonstrates the importance of two distinct genealogies. These consist of research programs focused on agriculturally oriented genetics, on the one hand, and systematics research concerned with evolution and diversity, on the other. The relative weight of these two modes of research shifted following the production of a reference genome for the species from 2006 to 2011. Before this inflection point, the research captured in our networks mainly involved intensive sequencing that concentrated primarily on increasing the resolution of genomic data both in particular regions and more widely across the genome. Sequencing practices later became more extensive, with greater focus on the generation and comparison of sequence data across and between populations. We explain these shifts in research modes as a function of the availability, circulation, distribution, and exchange of genomic tools and resources—including data and materials—concerning the pig in general, and increasingly for particular populations. Consequently, we describe the history of pig genomics as constituting a kind of bricolage, in which geneticists cobbled together resources to which they had access—often ones produced by them for other purposes—in pursuit of their research aims. The concept of bricolage adds to the thicker vision of genomics that we have shown throughout the special issue and further highlights the singularity of the dominant, thin narrative focused on the production of the human reference sequence at large-scale genome centers. This essay is part of a special issue entitled The Sequences and the Sequencers: A New Approach to Investigating the Emergence of Yeast, Human, and Pig Genomics, edited by Michael García-Sancho and James Lowe.
From the 1980s onwards, the Roslin Institute and its predecessor organizations faced budget cuts, organizational upheaval and considerable insecurity. Over the next few decades, it was transformed by the introduction of molecular biology and transgenic research, but remained a hub of animal geneticists conducting research aimed at the livestock-breeding industry. This paper explores how these animal geneticists embraced genomics in response to the many-faceted precarity that the Roslin Institute faced, establishing it as a global centre for pig genomics research through forging and leading the Pig Gene Mapping Project (PiGMaP); developing and hosting resources, such as a database for genetic linkage data; and producing associated statistical and software tools to analyse the data. The Roslin Institute leveraged these resources to play a key role in further international collaborations as a hedge against precarity. This adoption of genomics was strategically useful, as it took advantage of policy shifts at the national and European levels towards funding research with biotechnological potential. As genomics constitutes a set of infrastructures and resources with manifold uses, the development of capabilities in this domain also helped Roslin to diversify as a response to precarity.
This paper aims to clarify the consequences of new scientific and philosophical approaches for the practical-theoretical framework of modern developmental biology. I highlight normal development, and the instructive-permissive distinction, as key parts of this framework which shape how variation is conceptualised and managed. Furthermore, I establish the different dimensions of biological variation: the units, temporality and mode of variation. Using the analytical frame thus established, I interpret a selection of examples as challenges to the instructive-permissive distinction. These examples include the phenomena of developmental plasticity and transdifferentiation, the role of the microbiome in development, and new methodological approaches to standardisation and the assessment of causes. I further argue that investigations into organismal development should examine the effects of a wider range of kinds of variation, including variation in the units, modes and temporalities of development. I close by examining various possible opportunities for producing and using normal development free of the assumptions of the instructive-permissive distinction. These opportunities are afforded by recent developments, which include new ways of producing standards that incorporate more natural variation and are based on function rather than structure, and the ability to produce, store, and process large quantities of data.
In this paper, we progressively de-center the Human Genome Project (HGP) in the history of genomics and human genomics. We show that the HGP, understood as an international effort to make the human reference genome sequence publicly available, constitutes a specific model of genomics: prominent and influential but nevertheless distinct from others that preceded, existed alongside, and succeeded it. Our analysis of a comprehensive corpus of publications describing human DNA sequences submitted to public databases from 1985 to 2005 reveals a plethora of authoring institutions, with only a few contributing to the HGP. Examining these publications in a co-authorship network enables us to propose two different sequencing approaches—horizontal and vertical sequencing—whose changing dynamics shaped the history of human genomics. We argue that investigating the extent to which different institutions combined these approaches or prioritized one of them captures the history of genomics better than using the categories of large-scale sequence production and sequence use, as much scholarly literature concerning the HGP has done. Sequence production and use became fully distinct only within the HGP model, and especially during the last stages of this endeavor. By exploring a collaboration between Celera Genomics, a large-scale sequencing institution, and two medical genetics laboratories, we show the potential of our co-authorship network and its analysis for historical research. Our study connects the historiographies of medical genetics and human genomics and indicates that the so-called translational gap from sequence data to clinical outcomes may reflect the assumption that genomics was substantially different from prior and parallel genetics research. This essay is part of a special issue entitled The Sequences and the Sequencers: A New Approach to Investigating the Emergence of Yeast, Human, and Pig Genomics, edited by Michael García-Sancho and James Lowe.
This paper examines the model of network genomics pioneered in the late 1980s and adopted in the European Commission-led Yeast Genome Sequencing Project (YGSP). It contrasted with the burgeoning large-scale center model being developed in the United States to sequence the yeast genome, chiefly as a pilot for tackling the human genome. We investigate the operation and connections of the two models by exploring a co-authorship network that captures different types of sequencing practices. In our network analysis, we focus on institutions that bridge both the European and American yeast whole-genome sequencing projects, and such concerted projects with non-concerted sequencing of yeast DNA. The institutions include two German biotechnology companies and Biozentrum, a research institute at Universität Basel that adopted yeast as a model to investigate cell biochemistry and molecular biology. Through assessing these bridging institutions, we formulate two analytical distinctions: between proximate and distal, and directed and undirected sequencing. Proximate and distal refer to the extent that intended users of DNA sequence data are connected to the generators of that data. Directed and undirected capture the extent to which sequencing was part of a specific research program. The networked European model, as mobilized in the YGSP, enabled the coexistence and cooperation of institutions exhibiting different combinations of these characteristics in contrast with the more uniformly distal and undirected large-scale centers. This contributes to broadening the historical boundaries of genomics and presenting a thicker historiography, one that inextricably meshes genomics with the trajectories of biotechnology and cell biology. This essay is part of a special issue entitled The Sequences and the Sequencers: A New Approach to Investigating the Emergence of Yeast, Human, and Pig Genomics, edited by Michael García-Sancho and James Lowe.