Back in the day -- back before I had the carefully regulated, federally mandated, and strictly enforced lobotomy that allows people into the ranks of management -- I was once an analyst. And, since it seems so long ago that it doesn't sound like bragging anymore, I will admit that I was pretty good at it. I absolutely loved the process of using fundamental physics and empirical correlations for fluid dynamics, thermodynamics, and heat transfer, all pulled together in computer code, to simulate how things function in the real world. Whereas many people enter the field of engineering because they like mechanical things or electronic things, there are some of us who relish the seeming purity of problem solving in abstraction.
Over the years, working on many diverse projects and building many diverse mathematical models to simulate many diverse systems, I came to the realization that my models always appeared most unassailable and brilliant when there was no test data against which to compare them. To put it bluntly, test data always proves that you simply ain't as smart as you thought you were. But that's okay. If that wasn't the case, then you wouldn't bother to test. The whole point of testing is to gather data and learn more. With all due respect to the Serenity Prayer, this ought to be the analyst's prayer relative to testing:

Grant me --
-- the results to validate that which I do understand
-- the data to explain that which I did not understand
-- and the openness to accept that I can always understand better
That last line is critical. Ignoring data that runs contrary to your model output is a seductive, addictive, and dangerous path to follow. We don't, and won't, do that.

That brings us to the subject of test A2J003 of the J-2X development engine E10001. This was our first test to mainstage operation. The planned duration was to be seven seconds. On Tuesday 26 July, right around five in the afternoon, the test ran for 3.72 seconds and then shut down. We did not accomplish the full duration. Why? Basically because we ain't as smart as we thought we were. We had analytical models telling us that performance would be X, but the hardware knew better and decided on its own to perform to Y. Here is a cool video of the test:

[video of test A2J003]

A more detailed explanation of what happened is that the engine shut down due to the measurement of too high a pressure in the main combustion chamber. The measurement crossed a pre-set "redline" and the test controller unit took the automatic (and autonomous) action of shutting down the engine in a safe and controlled manner. The high pressure in the main chamber was caused by higher than expected power coming out of the oxidizer pump. This, in turn, was due to more power being delivered to the turbine side than expected. It comes down to a fluid dynamics phenomenon (pressure drops), and what we have is not inherently bad, just different than expected. So, in essence, we used our models to predict that the pressure in the main chamber would be at a certain level -- indicating a certain power level -- but the different performance of the hardware pushed us away from our analytical prediction.
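To make the redline mechanism a little more concrete, here is a minimal sketch of that kind of automated check. Everything in it -- the function names, the sampling scheme, and especially the threshold value -- is an assumption made up for illustration; the actual test controller unit monitors many parameters and is far more involved than this.

```python
# Minimal sketch of an automated redline check of the sort described above.
# All names and numbers here are illustrative assumptions, not J-2X values;
# the real test controller monitors many parameters, not just one.

MCC_PRESSURE_REDLINE_PSIA = 1500.0  # hypothetical, deliberately tight limit


def redline_violated(mcc_pressure_psia: float) -> bool:
    """True if main combustion chamber pressure exceeds its pre-set redline."""
    return mcc_pressure_psia > MCC_PRESSURE_REDLINE_PSIA


def run_test(sensor_samples, planned_duration_s: float):
    """Scan time-tagged pressure samples; command a safe shutdown on the
    first redline violation, otherwise run to the planned duration."""
    for t, p in sensor_samples:
        if t >= planned_duration_s:
            break
        if redline_violated(p):
            print(f"t = {t:.2f} s: MCC pressure {p:.0f} psia over redline; "
                  f"commanding safe, controlled shutdown")
            return t
    print(f"Full planned duration of {planned_duration_s:.1f} s achieved")
    return planned_duration_s


# Example: chamber pressure ramping up faster than the model predicted,
# crossing the (hypothetical) redline partway through a seven-second test.
samples = [(i / 100.0, 405.0 * (i / 100.0)) for i in range(0, 800)]
run_test(samples, planned_duration_s=7.0)
```

With the redlines clamped down tight like this, the controller errs on the side of protecting the hardware, which is exactly what happened on A2J003.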
- Here is the good part: We learned something. We learned that our model needs to be updated, and we collected the data that will allow that to happen.
- Here is another good part: We got enough data, despite the short duration, to recalibrate the engine for the next test, thereby making it far more likely that we will hit our target.
- Here is yet another good part: We had a successful demonstration of the test controller redline system by safely shutting down the engine. The engine looks fine. The controller did exactly what it was supposed to do and protected the hardware. In fact, for these early tests we have the redlines clamped down pretty tight specifically to protect the hardware as we learn more about the engine.
- And here is, finally, yet another good part: Other than the power applied to the oxidizer turbopump, most of our other predictions with regard to hardware performance appear to be awfully darn good. So, we've got a preliminary validation for much of our understanding of the engine. Indeed, this is a brand new engine and we have just accomplished mainstage operation in the second hot-fire test. That is truly unprecedented.
- Here is the bad part: We have to spend a few minutes explaining to folks not directly involved that, despite not achieving full duration, the test was in reality a total success.

If that, then, is the bad part, I can live with it. I can live with admitting that we ain't as smart as we thought we were. Why? Because now, after the test, we are indeed smarter. And we will continue to get smarter and smarter about the J-2X design until, one day, we will be smart enough to say that, yes, we understand this engine so well that it is safe enough to propel humans into space.

The first milestone gate in the life cycle is Authority to Proceed (ATP). For activities as substantial as rocket engine development, we don't go off willy-nilly and decide for ourselves when to start such a thing. While that might be fun for a little while, it is a generally accepted and overarching rule of thumb that prison is something we should endeavor to avoid. There is a whole chain of command that ultimately flows down from Congress and the Administration. Thus, at ATP we essentially get a formal charter to fulfill certain objectives such as, in this case, develop an engine. Also, with that charter comes the authority to spend time and money in pursuit of the objectives. The authority part is key. And so, not much can happen until ATP.
Next, you have the milestone of the System Requirements Review (SRR). Before you can design something, you need to know what it is that you need and expect it to do. Now, some requirements development had better have happened at a higher level before you even began the project -- otherwise, how would you even know that you need to start developing a rocket engine versus, say, a rocking chair? -- but getting a set of valid, system-level functional, performance, physical, and safety requirements is extremely important. In addition to the system requirements pertaining to the engine, the SRR also covers a whole slate of programmatic documentation that forms the infrastructure for the organization responsible for the development effort. This review therefore establishes the foundation of the product and the processes for the development effort to follow.
The first true design review is the Preliminary Design Review (PDR). The question at this review is, at its root, pretty simple: Do we have the right design? Now that we've spent some time doing analyses, performing trade studies between subsystem design concepts, perhaps running component or subscale tests, and laying out the initial drawings, can we say with sufficient confidence that we have a functional design that meets our technical requirements within the limits of the time and money we've been allocated? To me, more than any other moment in the development life cycle, this is the pivotal point. If you fail here, everything stops, as it should. It means that you've been working on the wrong design. Start over. However, if you are successful, then you start ordering materials to begin building the engine and you commit to completing the design. The stakes are very, very high.
The next milestone is an interesting one. For many, many development efforts other than rocket engines, the Critical Design Review (CDR) is The Design Review. For these other projects, it represents a true completion point for the final, to-be-flown design and it often takes place after prototype fabrication and testing. But for a rocket engine development effort, the CDR takes on a somewhat different flavor. This is because just building a rocket engine for testing can take several years, and the necessary extensive test program can consume a couple more years of activity. Thus, for a rocket engine, the CDR is focused on getting the right design into the test stand. The question to be answered is this: Are we still on the right path, with the design essentially finished, to commit to the rest of the effort? Therefore, you review not only the design (and associated matured analyses) but also all of the planning documentation that explains how the engines that you are already building will be used to demonstrate that the design meets all of the imposed requirements. In essence, you have to prove that all of your ducks are in a row because now you're getting into some serious welding, grinding, shaping, and cutting of metal.
The final, formal milestone for a rocket engine development project is the Design Certification Review (DCR). At the end of this review process, the engine is declared to be certified for spaceflight. So, what is reviewed? It is a combination of several things. The first part is known as a physical configuration audit. This is basically a demonstration that you can (and have, and will continue to in the future) build what the design drawings prescribe. The second part is known as the functional configuration audit. This is a review of all of the collected evidence demonstrating that the engine fulfills all of the functional, performance, physical, and safety requirements imposed at the very beginning. The evidence is a wide array of test data, analyses, and/or inspection results. And the final portion of the DCR is a review and approval of all of the other products related to the engine design. These include operational manuals and, most importantly, all of the reliability and safety analyses and assessments, plus an assessment of the quality and configuration management systems in place to ensure continued high standards during subsequent engine production. Overall, when you complete a successful DCR, you are stamping the entire development effort as a success and pointing towards a future of successful launch operations. It is therefore both an endpoint and a point of transition. 