There is a very old question when it comes to faculty evaluations and promotions: should we measure scientific accomplishments by the quality of publications or by the amount of money raised? The question may sound strange, but there is a history of discussion behind it.
In any American research university, a faculty member has three main responsibilities: (1) teaching, (2) research, and (3) service. Teaching is obvious and well understood by both academics and the broader community. Service is considered the least important of the three; it includes service to the university on various committees and to the profession (for example, reviewing grant proposals and articles, organizing conferences, editing journals, etc.). Defining research, however, is tricky, because it is not clear who is supposed to fund this activity. And whoever pays usually gets to define things.
In the current climate of chronic underfunding, all universities, colleges, and departments are seeking external money. An official job description for a faculty position always includes a sentence like "create a strong externally funded research program." Typically, research money is spent on supporting graduate students and postdocs, on materials and equipment, on administrative costs (the overhead), and the like. External money can come from national agencies (such as the National Science Foundation or the Department of Energy), from private or public foundations, from local funds, or from industry.
External money is of great importance to the university. The overhead is about 50% for most grants and contracts (so a grant with, say, $100,000 in direct costs brings roughly an additional $50,000 to the institution), and the college's share of the overhead can be used to hire part-time instructors (to be specific, account 150 money can be used to hire instructors). As a result, there is a tendency to measure research activity by the amount of external money raised, regardless of whether the money is actually research money and regardless of the source it came from.
Grants and contracts "from any source" are indeed very important. However, raising money is not, in itself, a scientific research activity; I would call it a "business activity." It is good because it brings in money for all of us, but we must keep in mind that the university is a research institution and not a commercial enterprise. Therefore, we cannot justify our research work merely by raising money without creating and disseminating knowledge.
On the other hand, not all research is expensive. There are many problems in mechanics that do not require a lot of experimental equipment or postdoc manpower (or womanpower). Furthermore, graduate students are not always funded by their advisor's grants; other sources of funding exist (e.g., teaching assistantships, self-supporting students, industry-supported students, and students with various fellowships).
The question was once raised by a senior faculty member: "What if we have a faculty member with zero funding per year, but with high-impact papers?"
If we have a person with a lot of money, but zero scientific publications, the person is a good businessman (or businesswoman) but not a scientist. If we have a person with a lot of good publications, but zero money, the person is a good scientist but not a good businessperson or manager.
Given that we are talking about scientific and engineering research (remember the "teaching-research-service" triad), business does not fit the definition well. Money is a means for scientific research, not its goal. Publication (spreading new knowledge) is the goal.
Having said that, raising money and creating startup companies is, of course, a very important side of engineering and innovation. I am currently on sabbatical at the Technion in Israel, the self-proclaimed "Startup Nation." Most STEM professors here either have a startup company or want to create one, since the environment is very encouraging for innovative engineering business.
There is another issue: it is not easy to evaluate the quality of a research publication. There are many journals today in which almost any paper can be published, regardless of its quality or correctness. Of course, there are reputable journals (usually those with high impact factors, or those published by respected learned societies such as the Royal Society). The importance of a paper (and its author) can also be judged, to some extent, by how many people cite it, although citation metrics come with many controversial issues of their own. Consequently, it is much easier to judge somebody's work by the amount of money it raises and spends than by its lasting impact on the research literature.
As for our American funding agencies, such as the National Science Foundation (NSF) or the Department of Energy (DOE), my record with them has not been very successful. Therefore, I may be slightly biased, although I am trying to remain fair. In my view, these agencies (or at least some program directors) fund many mediocre projects while excellent proposals are rejected. Maybe it is time to help these taxpayer-funded agencies by creating a watchdog group to monitor what they fund and why. If any group of concerned taxpayers is willing to support this activity, I would be only happy to participate.
When you start your work as a new faculty member, you are typically told that obtaining grants is a technical matter: assuming you have good publications, are known in your area, are involved in "hot" (i.e., fashionable) topics, and have new ideas, you just need to do the paperwork persistently, and they will give you grants if you do everything correctly. However, after several years of trying you realize that these agencies are not seeking new talent. They are not very interested in giving their awards to new, bright people. Instead, they are overwhelmed with hopeful applicants, many of whom have been around for years and have been told "maybe next year" by program directors many times. Many of these applicants are from top-20 schools, often from the hard sciences (such as chemistry and physics, as opposed to engineering). You submit your proposal to program A, which you think best fits your area, and the program director of A answers: you have a very interesting topic, but I suggest you submit it to program B. You write to the program director of B, and she or he wants you to submit to yet another program. Your ideas are clearly not very welcome, while the agencies keep increasing the paperwork burden in order to discourage the stream of applicants.
Typically, there may be about thirty proposals in every program, and only one or two (rarely three) are funded. Among these thirty applicants, there are always those who have some good connection to the NSF for one reason or another (e.g., they applied in preceding years, came close, and have since improved their proposals, so the program director feels obligated to them or simply has no grounds to reject them). In addition, the proposals are judged by anonymous reviewers, usually about five professors who have themselves been funded by the NSF, and only proposals with all five "excellent" grades can get funded. Unanimity is a high bar: even if each reviewer independently gave an "excellent" with, say, a 60% probability, all five would do so less than 8% of the time. These referees also tend to favor research that is similar to their own. For example, a chemical engineering professor would consider research centered on a chemistry lab more worthy than a mathematical or computational study. As a result, if you come from a slightly different field (say, you are a mechanical engineer doing research in surface science), your chances of getting the excellent grade from all five reviewers approach zero, because the referees will include colloid chemists and not only mechanical engineers. Therefore, the system essentially marginalizes those who come from a different field and do not fit a particular program's background exactly. And no, NSF programs do NOT cover all research fields.
Consequently, you may turn to private funds. During my sabbatical in Israel, as well as during my trips to Russia, I was told many stories about "oligarchs" (tycoons or moguls) supporting this or that research, which is certainly great. However, when I asked about the motivation behind such philanthropy, I was told that in many notable cases it is money laundering, often semi-legal or in a gray zone. Now the question arises: when we write in our policy documents that "money from any source" is a criterion of a faculty member's scientific accomplishments, are we encouraging participation in various murky financial schemes? Of course, this is a rhetorical question. In my view, scientific activity should be judged by quality publications and their lasting impact on the literature in the research area, not by the amount of money raised, and certainly not by "money from any source."