What are good practices for reusing desalting columns?

According to a few sources (Prozyme and Protocols-Online), it is possible to reuse desalting columns, and since I'm cheap I would like to do so as well.

The key steps seem to be washing with several column volumes of your buffer of choice. Also, certain columns, like cross-linked desalting columns, are not amenable to regeneration. I'm curious about other factors.

I'm mainly using Bio-Rad micro-spins

Regardless of what protocol you use, and what the advertised efficacy of that protocol might be, in any situation like this I think the important thing to consider is: what would happen if the material taken from a re-used column was contaminated by a previous application? Can you live with the consequences of such contamination?

If you are preparing DNA for further use (PCR and/or cloning and/or transformation) then you run the risk of propagating a contaminant through subsequent steps and getting into a real mess. I worked in a lab once where one postgrad ended up spending several weeks working with a cloned fragment that was actually derived from someone else's work in the same lab (although not due to re-use of a column as far as I remember).

If you are preparing protein samples then the risks are possibly reduced, but if the protein sample is going to be subjected to sensitive methods (blotting, MS) then again, things could get messy.

Answer Desk

"Is it possible" gets a carefully conditional "yes". Do we have the expertise or technology to do it now? No. One of the biggest difficulties is that nerve tissue (the connections between the brain and the rest of the body) takes time to heal, time during which the brain cannot send sufficient commands to the rest of the body to keep autonomic functions running.

Head transplants, however, have been "successfully" performed on monkeys. The monkey lived for some time after the transplant but eventually died. China has also been known to perform a similar procedure on dogs.

Caution: this may be too explicit for some. I'm not sure ethics boards would allow it nowadays:
Monkey Head Transplant

Sterile work area

The simplest and most economical way to reduce contamination from airborne particles and aerosols (e.g., dust, spores, shed skin, sneezing) is to use a cell culture hood.

  • The cell culture hood should be properly set up and located in an area restricted to cell culture work, free from drafts from doors, windows, and other equipment, and with no through traffic.
  • The work surface should be uncluttered and contain only the items required for a particular procedure; it should not be used as a storage area.
  • Before and after use, the work surface should be disinfected thoroughly, and the surrounding areas and equipment should be cleaned routinely.
  • For routine cleaning, wipe the work surface with 70% ethanol before and during work, especially after any spillage.
  • You may use ultraviolet light to sterilize the air and exposed work surfaces in the cell culture hood between uses.
  • Using a Bunsen burner for flaming is neither necessary nor recommended in a cell culture hood.
  • Leave the cell culture hood running at all times, turning it off only when it will not be used for an extended period of time.

How to reuse dynamic columns in an Oracle SQL statement?

but I think this is kinda ugly. Furthermore I want to make the query somewhat more complex, e.g. reusing 'Q' as well, and I do not want to create yet another subquery.

Update: The reason I want to store the calculation of 'P' is that I want to make it more complex and reuse 'P' multiple times. So I do not want to explicitly say 'A*2+5 AS Q', because that would quickly become too cumbersome as 'P' gets more complex.

There must be a good way to do this, any ideas?

Update: I should note that I'm not a DB-admin :(.

Update: A real world example, with a more concrete query. What I would like to do is:

for now, I've written it out, which works, but is ugly:

I could do all of this after receiving the data, but I thought, let's see how much I can let the database do. Also, I would like to select on 'BSA' as well (which I can do now with this query as a subquery/with-clause).
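The subquery/with-clause pattern mentioned above can be sketched as follows, with SQLite standing in for Oracle and made-up table and column names; 'P' is computed once in the factored subquery and is then reused both for 'Q' and in the WHERE clause:

```python
import sqlite3

# Illustrative only: 't', 'A', 'P', and 'Q' are placeholder names, and
# SQLite stands in for Oracle to show the WITH-clause (subquery
# factoring) pattern, which Oracle also supports.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (A REAL)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])

# Compute P once in the factored subquery, then reuse it freely.
rows = conn.execute("""
    WITH base AS (
        SELECT A, A * 2 + 5 AS P FROM t
    )
    SELECT A, P, P * 3 AS Q
    FROM base
    WHERE P > 7
""").fetchall()

print(rows)  # each row is (A, P, Q) with P = A*2+5 and Q = P*3
```

However 'P' grows in complexity, its expression now appears in exactly one place.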

Update: OK, I think for now I'm finished, with Cade Roux's and Dave Costa's solution. Although Pax's and Jens Schauder's solutions would look better, I can't use them since I'm not a DBA. Now I don't know who to mark as the best answer :).

BTW, in case anyone is interested, SB is the 'surface brightness' of galaxies, for which B and D are correction terms.

11th International Symposium on Process Systems Engineering

2.2.2 Impurity diverted path

Purification causes not only pressure drop, but also hydrogen loss in terms of tail gas, which is discharged to the fuel gas system. Hence, unnecessary purification should be avoided.

Reasonable mixing can increase direct reuse. The impurity-diverted path is a way to mix internal sources with each other or with fresh hydrogen, which is low in every impurity. In this way, internal sources with high impurity concentrations can be diluted by other sources with low concentrations of the corresponding impurities, making the high-impurity sources feasible for direct reuse.

For example, if part of internal source 1 can be mixed with source 2 or source 3 to dilute some impurities, then that part of internal source 1 can be sent directly to sink 2 instead of being sent for purification before reuse. Similarly, the mixture of internal sources 2 and 3 sent to sink 3 is another such case.
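The dilution step above amounts to a flow-weighted mass balance. A minimal sketch, in which the flows, impurity fractions, and sink limit are invented for illustration:

```python
# When two hydrogen sources are blended, the impurity fraction of the
# mix is the flow-weighted average of the source impurity fractions,
# so a high-impurity source can be brought within a sink's limit by
# mixing it with a cleaner source.

def blend_impurity(flows, impurity_fracs):
    """Impurity fraction of a mix: sum(F_i * x_i) / sum(F_i)."""
    total = sum(flows)
    return sum(f * x for f, x in zip(flows, impurity_fracs)) / total

# Internal source 1 (too dirty alone, 8% impurity) blended with a
# cleaner source (2% impurity); all numbers are assumptions.
x_mix = blend_impurity(flows=[100.0, 150.0], impurity_fracs=[0.08, 0.02])

sink_limit = 0.05  # assume sink 2 tolerates at most 5% impurity
print(x_mix, x_mix <= sink_limit)  # 0.044 True
```

The same balance extends to any number of sources, which is how the candidate paths A through F would be screened for feasibility.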

In addition, some internal sources cannot be sent directly to sinks even after mixing, because of low pressure; internal source 3 is an example. As there are no sinks with pressure lower than that of internal source 3, it can only be mixed with fresh hydrogen and then reused directly after being pressurized by compressors.

Thus, A, B, C, D, E and F are all possible impurity-diverted paths, and the feasible ones can be determined according to practical considerations.

With both pressure-cascade use and the impurity-diverted path, the possible direct reuse of hydrogen sources can be taken into consideration, and fresh hydrogen consumption can be reduced.


You may start working on projects by yourself or with a small group of collaborators you already know, but you should design it to make it easy for new collaborators to join. These collaborators might be new grad students or postdocs in the lab, or they might be you returning to a project that has been idle for some time. As summarized in [steinmacher2015], you want to make it easy for people to set up a local workspace so that they can contribute, help them find tasks so that they know what to contribute, and make the contribution process clear so that they know how to contribute. You also want to make it easy for people to give you credit for your work.

Create an overview of your project. (3a) Have a short file in the project's home directory that explains the purpose of the project. This file (generally called README, README.txt, or something similar) should contain the project's title, a brief description, up-to-date contact information, and an example or two of how to run various cleaning or analysis tasks. It is often the first thing users and collaborators on your project will look at, so make it explicit how you want people to engage with the project. If you are looking for more contributors, make it explicit that you welcome contributors and point them to the license (more below) and ways they can help.
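As a sketch, a minimal README along these lines might look like the following (the project name, contact, and commands are invented for illustration):

```
MyProject: scripts for cleaning and analyzing field survey data

A one-paragraph description of what the project does and why it exists.

Contact: Jane Doe <jane.doe@example.org>

Example usage:

    python clean_data.py raw/survey.csv > results/survey_clean.csv
    python analyze.py results/survey_clean.csv

Contributions are welcome; see CONTRIBUTING for how to get started
and LICENSE for terms of reuse.
```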

You should also create a CONTRIBUTING file that describes what people need to do in order to get the project going and use or contribute to it, i.e., dependencies that need to be installed, tests that can be run to ensure that software has been installed correctly, and guidelines or checklists that your project adheres to.

Create a shared "to-do" list (3b). This can be a plain text file called something like notes.txt or todo.txt, or you can use sites such as GitHub or Bitbucket to create a new issue for each to-do item. (You can even add labels such as "low hanging fruit" to point newcomers at issues that are good starting points.) Whatever you choose, describe the items clearly so that they make sense to newcomers.

Decide on communication strategies. (3c) Make explicit decisions about (and publicize where appropriate) how members of the project will communicate with each other and with externals users / collaborators. This includes the location and technology for email lists, chat channels, voice / video conferencing, documentation, and meeting notes, as well as which of these channels will be public or private.

Make the license explicit. (3d) Have a LICENSE file in the project's home directory that clearly states what license(s) apply to the project's software, data, and manuscripts. Lack of an explicit license does not mean there isn't one; rather, it implies the author is keeping all rights and others are not allowed to re-use or modify the material.

We recommend Creative Commons licenses for data and text, either CC-0 (the "No Rights Reserved" license) or CC-BY (the "Attribution" license, which permits sharing and reuse but requires people to give appropriate credit to the creators). For software, we recommend a permissive open source license such as the MIT, BSD, or Apache license [laurent2004].

What Not To Do

We recommend against the "no commercial use" variations of the Creative Commons licenses because they may impede some forms of re-use. For example, if a researcher in a developing country is being paid by her government to compile a public health report, she will be unable to include your data if the license says "non-commercial". We recommend permissive software licenses rather than the GNU General Public License (GPL) because it is easier to integrate permissively-licensed software into other projects; see chapter three in [laurent2004].

Make the project citable (3e) by including a CITATION file in the project's home directory that describes how to cite the project as a whole, and where to find (and how to cite) any data sets, code, figures, and other artifacts that have their own DOIs. The example below shows the CITATION file for the EcoData Retriever; for an example of a more detailed CITATION file, see the one for the khmer project.
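A CITATION file usually gives a preferred citation string plus pointers to archived artifacts. A hypothetical sketch (this is not the actual EcoData Retriever or khmer file; all names and DOIs below are placeholders):

```
Please cite this software as:

    Doe J, Smith A (2016). MyProject: tools for cleaning survey data.
    Version 1.0. doi:10.xxxx/zenodo.000000

The dataset analyzed in the accompanying paper has its own DOI:

    doi:10.xxxx/dryad.000000
```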

Many ecology and evolution journals have recently adopted policies requiring that data from their papers be publicly archived. I present suggestions on how data generators, data re-users, and journals can maximize the fairness and scientific value of data archiving. Data should be archived with enough clarity and supporting information that they can be accurately interpreted by others. Re-users should respect their intellectual debt to the originators of data through citation both of the paper and of the data package. In addition, journals should consider requiring that all data for published papers be archived, just as DNA sequences must be deposited in GenBank. Data are another valuable part of the legacy of a scientific career and archiving them can lead to new scientific insights. Archiving also increases opportunities for credit to be given to the scientists who originally collected the data.


Rule 5: Describe How Data Quality Will Be Assured

Quality assurance and quality control (QA/QC) refer to the processes that are employed to measure, assess, and improve the quality of products (e.g., data, software, etc.). It may be necessary to follow specific QA/QC guidelines depending on the nature of a study and research sponsorship; such requirements, if they exist, are normally stated in the RFP. Regardless, it is good practice to describe the QA/QC measures that you plan to employ in your project. Such measures may encompass training activities, instrument calibration and verification tests, double-blind data entry, and statistical and visualization approaches to error detection. Simple graphical data exploration approaches (e.g., scatterplots, mapping) can be invaluable for detecting anomalies and errors.
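As a minimal sketch of one such statistical error-detection step (the measurements and cutoff below are invented; a real plan might instead specify scatterplots or more robust screens):

```python
# Flag values that lie far from the rest of a column as candidate
# data-entry errors, using a simple z-score screen.

def flag_outliers(values, z_cutoff=3.0):
    """Return indices of values more than z_cutoff standard deviations from the mean."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    if sd == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mean) / sd > z_cutoff]

# A plausible typo: 210 entered instead of 21.0.
measurements = [19.8, 20.1, 21.0, 20.4, 19.9, 210.0, 20.2]
print(flag_outliers(measurements, z_cutoff=2.0))  # [5]
```

A screen like this would be one line item in the QA/QC section of a data management plan, alongside calibration checks and double entry.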


This paper presents both theoretical and practical contributions to the study of scientific data reuse. In terms of theory, we developed a research model for how scientists’ normative perceptions and attitudes influence their data reuse behaviours. Therefore, this study offers a theory that can inform future studies on factors that may encourage or hinder scientists from reusing data collected by others.

We can draw some practical implications from our data to suggest what might be done to encourage scientists to reuse the increasing volumes of data being shared. First, given the importance of the perceived efficacy, and to a lesser extent efficiency, of data reuse, we suggest that these values be widely demonstrated to make them more apparent to more researchers. For example, a project like DataONE can provide concrete demonstrations of how data reuse can enable researchers to answer their current or new questions effectively and efficiently. Such demonstrations might take several forms: YouTube video case studies of data reuse, Jupyter notebooks demonstrating the process of reusing data, exemplary data reuse papers, or a combination of the above. Such materials could help reduce the initial barrier to data reuse by demonstrating its value in a more practical and palpable way. Additional materials could provide more specific training in the elements of data reuse, such as data discovery or the use of metadata to help understand a dataset and its provenance. Another topic that seems to need better attention is data citation. Building and maintaining policies, guidelines and services for appropriate attribution and formal citation of datasets is key to leveraging data sharing and to legitimizing the reuse of these research artifacts.

While the literature emphasizes trust as an important factor in data reuse (or in deterring data reuse in the case of a lack of trust), this factor was not found to be significant in our study. It may be that the efforts of data repositories to establish the trustworthiness of candidate data for reuse and to encourage good metadata for shared data is paying off in the attitudes of the sample of scientists who completed the survey.

Finally, our results suggest the need to address norms about data reuse that encourage or discourage this practice. However, it must be recognized that norms are by their nature hard to change. Possible avenues for influence include having visible and established members of a field advocate, publicly and also privately (e.g., through reviews or tenure letters), for the value and acceptance of data reuse as a worthy research practice. Acceptability could be further shaped by recognition of good data reuse, such as awards for exemplary data reuse papers or other compensation mechanisms for those who expand scientific discovery through reuse. Actions addressing attitudes and subjective norms to increase data reuse will be key to fully realizing the potential of research data and to legitimizing the investments and policies in favor of data sharing.

One setting where data sharing and reuse are currently standard practice is the Synthesis Center. Synthesis Centers fund teams of people to tackle complex ecosystem science questions exclusively using existing data, and therefore can serve as a model for the promotion of data sharing and reuse in other scientific fields. This replaces the traditional research workflow, which pivots around the collection of new data, with one of team science, giving 'new life to old and dark data' [19, 22, 34, 61, 62]. This has resulted in high productivity in terms of research outputs; in fact, the Synthesis Center NCEAS is one of the most highly cited research units in the United States of America, with over 2,500 articles [63] across its more than 20 years of existence, 12% of them in high-impact journals.

There is promising evidence both here and in previous research that the case for data reuse is being heard. Incentivizing and normalizing this emerging model of research practice remain fruitful areas of inquiry, as well as the need to bridge the apparent gap between sharing and reuse behaviours.

Since this research was based on a published dataset, a next step towards a more comprehensive understanding of data reuse would be to identify other dimensions and factors that may prevent and/or prompt scientists' data reuse behaviours. This study encourages future research to consider not only reusing the theoretical model and measurements proposed here, but also expanding on them, by adding constructs such as intention to reuse, and facets this research might have overlooked or could not assess due to the limitations of the data at hand. Finally, future research should examine the antecedents of attitudes and subjective norms suggested by TRA to determine which are most influential, in order to develop recommendations that address these factors and so promote data reuse.

This technique works best when the lecture content is heavy and you need to organize it in a structured and easy form. It can also be used when you have no idea about the content of the lecture to be presented.

Advantages:

  • Visually appealing
  • Can be used for noting down detailed information, but in a concise form
  • Allows easy editing of the notes

Disadvantages:

  • While mapping your notes, you might run out of space on a single page
  • Can be confusing if information is wrongly placed while taking notes