Abstracts FJ/OH-SS 2005
Topic 1: Solving Particle Transport Problems with the Monte Carlo Method
Lecturer: Prof Forrest B. BROWN (Los Alamos National Laboratory, USA)
1.1 Overview of Monte Carlo Methods
1.2 State-of-the-art
1.3 Current Research Challenges & Perspectives
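As a minimal illustration of the topic (not material from the lecture), the sketch below plays the analog Monte Carlo game for mono-energetic particles in a 1-D slab, with an assumed cross section, scattering ratio, and thickness, tallying transmission, reflection, and absorption together with a one-sigma statistical estimate.

```python
import math
import random

# Analog Monte Carlo for mono-energetic particles in a 1-D slab with
# isotropic scattering.  All physical values are assumed, illustrative.
SIGMA_T = 1.0        # total macroscopic cross section (1/cm), assumed
P_SCATTER = 0.7      # probability that a collision is a scatter, assumed
H = 5.0              # slab thickness (cm), assumed
N = 100_000          # particle histories

def one_history(rng):
    """Follow one particle entering the left face; return its fate."""
    x, mu = 0.0, 1.0                       # position, direction cosine
    while True:
        x += mu * (-math.log(rng.random()) / SIGMA_T)   # sampled free flight
        if x < 0.0:
            return "reflected"
        if x >= H:
            return "transmitted"
        if rng.random() >= P_SCATTER:
            return "absorbed"
        mu = rng.uniform(-1.0, 1.0)        # isotropic post-collision direction

rng = random.Random(42)
tally = {"reflected": 0, "transmitted": 0, "absorbed": 0}
for _ in range(N):
    tally[one_history(rng)] += 1

p = tally["transmitted"] / N
sigma = math.sqrt(p * (1.0 - p) / N)       # one-sigma binomial estimate
print(f"transmission probability = {p:.4f} +/- {sigma:.4f}")
```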
Topic 2: Novel Types of Integral Experiments in Zero-power Reactors
2.1 Source-driven Sub-critical Experiments
Lecturer: Dr George R. IMEL
There are currently several major research programs in the world studying Accelerator Driven Systems (ADS) that have revived interest in the physics of sub-critical systems. Among these are the European programs MUSE (Multiplication Avec Source Externe) in France and TRADE (Triga Reactor Accelerator Driven Experiment) in Italy.
There are two broad groups of experiments that
are of primary interest: core characterisation and
reactivity measurement. Core characterisation
includes such measures as fission rate traverses, beta/lambda, source
importance, and spectral indices. These topics will be briefly covered, as they
are not so unique to sub-critical systems. More coverage will be devoted to the
methods of reactivity measurement and monitoring, as it is here that the
problems become much more difficult compared to critical systems.
Thus, we will investigate methods in the
following sub-groups: static, quasi-static, and dynamic. Primary in the static
category is the source multiplication method, which uses the proportionality of
detector count rates to infer the reactivity. Quasi-static methods can be
considered as static in the macroscopic sense, but dynamic in the microscopic
sense. These include noise and cross-correlation techniques, such as Feynman-
and Rossi-alpha. Dynamic techniques include rod-drops, source jerks, and pulsed neutron source.
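To make the noise techniques concrete, here is a minimal, hypothetical Feynman-alpha (variance-to-mean) sketch. The synthetic event train and all numerical values are invented so the script runs end-to-end; with real MUSE or TRADE data, `events` would hold measured detector time stamps.

```python
import numpy as np
from scipy.optimize import curve_fit

# Feynman-alpha (variance-to-mean) analysis sketch.  A synthetic,
# clustered event train (Poisson "chains", each spawning correlated
# detections) stands in for measured detector time stamps.
rng = np.random.default_rng(1)
parents = np.cumsum(rng.exponential(1.0e-3, 20_000))   # chain start times (s)
events = []
for t in parents:
    n = rng.geometric(0.5)                             # detections per chain
    events.extend(t + rng.exponential(5.0e-4, n))      # chain decay time scale
events = np.sort(np.array(events))

def feynman_Y(events, gate):
    """Variance-to-mean ratio minus one for a given gate width (s)."""
    counts, _ = np.histogram(events, bins=np.arange(0.0, events[-1], gate))
    return counts.var() / counts.mean() - 1.0

gates = np.array([1e-4, 2e-4, 5e-4, 1e-3, 2e-3, 5e-3, 1e-2])
Y = np.array([feynman_Y(events, T) for T in gates])

def model(T, Y_inf, alpha):
    """Standard one-group Feynman-Y shape."""
    return Y_inf * (1.0 - (1.0 - np.exp(-alpha * T)) / (alpha * T))

(Y_inf, alpha), _ = curve_fit(model, gates, Y, p0=(1.0, 1000.0))
print(f"fitted Y_inf = {Y_inf:.2f}, prompt decay constant alpha = {alpha:.0f} 1/s")
```

The fitted prompt decay constant alpha is the quantity that, together with beta/lambda data, is used to infer the reactivity of the sub-critical assembly.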
The basic theories of the above-mentioned techniques will be introduced, and their application will be demonstrated with real experimental data from MUSE and TRADE. The problems of extrapolating to a larger-scale ADS will also be covered.
2.2 Experiments Employing Power-reactor Fuel
Lecturer: Prof Rakesh CHAWLA (PHB Ecublens, Swiss Federal Institute of Technology, Lausanne, Switzerland)
Short and medium term trends in nuclear power
development are dictated, to a considerable extent, by the demands for improved
economy, more efficient fuel cycle strategies and greater operational
flexibility for current and future light water reactors (LWRs).
One of the principal consequences of corresponding utility projects, aiming for example at higher discharge burnups for the fuel and/or increased cycle lengths, is that fuel assembly and reactor core designs have become increasingly complex, creating the need for new validation efforts for the reactor physics calculational tools employed.
For several years, a research programme, LWR-PROTEUS, has been under way in this context at the Paul Scherrer Institute in Switzerland.
The present coverage of this novel type of
experimentation in zero-power reactors relates mainly to the three different
phases of the LWR-PROTEUS programme, viz. (i) the investigation of reaction rate distributions and rod
removal worths in highly heterogeneous BWR fuel
assemblies, (ii) the assessment of reactivity effects in highly burnt PWR and
BWR fuel, and (iii) void coefficient studies for advanced BWRs.
A brief overview will also be presented of LIFE@PROTEUS, an R&D programme envisaged for the period 2007-2011, in which a
novel database for heterogeneous core loadings will be generated employing
significant quantities of highly burnt fuel.
Topic 3: High Burn-up Fuels for LWRs
3.1 Motivations and Physics Consequences
Lecturer: Dr Kevin W. HESKETH (Nexia Solutions, UK)
The OECD Nuclear Energy Agency (NEA) currently has an
Expert Group which is considering very high burn-up fuel in Light Water
Reactors (LWRs). The initiative was taken by the NEA Nuclear Science Committee, which considered very high burn-up in LWRs an important current issue in need of systematic review. At the time
of this lecture, the Expert Group has met twice and is drafting its report,
which attempts to consider all the issues raised by very high burn-ups and to
indicate how these may be addressed. The final meeting will be held in October
at which the draft will be finalised for publication.
Much of the material in this lecture reflects the discussions held within the
Expert Group and gives you an example of how international organisations
can contribute to the continued development of nuclear power.
The lecture looks at the factors that would motivate LWR utilities to extend average discharge burn-ups well beyond current values (very high burn-ups here being taken as 60 to 100 GWd/t). In particular, it considers fuel cycle costs and asks whether there is a direct economic incentive for very high burn-ups. The lecture then considers the impact of very high burn-ups on the nuclear physics aspects, and how these technical factors will themselves affect operations and costs.
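To make the economics question concrete, here is a back-of-envelope sketch of front-end fuel cycle cost per unit electricity as burn-up increases. The unit prices, the linear enrichment-versus-burn-up rule, the tails assay, and the thermal efficiency are all stated assumptions, not Expert Group figures; the point is only that higher burn-up raises the cost per kilogram of fuel while also raising the energy extracted per kilogram.

```python
import math

def swu_value(x):
    """Separation potential V(x) = (2x - 1) ln(x / (1 - x))."""
    return (2.0 * x - 1.0) * math.log(x / (1.0 - x))

# Illustrative unit prices and rules of thumb (assumptions):
C_U, C_CONV, C_SWU, C_FAB = 80.0, 10.0, 120.0, 300.0  # $/kgU, $/kgU, $/SWU, $/kgU
X_TAILS, X_FEED, EFF = 0.0025, 0.00711, 0.33          # tails, natural U, efficiency

for bu in (45.0, 60.0, 75.0, 90.0):                   # GWd/tU = MWd/kgU
    x_p = (0.5 + 0.078 * bu) / 100.0                  # crude enrichment rule, assumed
    feed = (x_p - X_TAILS) / (X_FEED - X_TAILS)       # kg feed per kg product
    swu = (swu_value(x_p) + (feed - 1.0) * swu_value(X_TAILS)
           - feed * swu_value(X_FEED))                # SWU per kg product
    cost_kg = feed * (C_U + C_CONV) + swu * C_SWU + C_FAB   # front-end $/kgU
    mwh_e = bu * 24.0 * EFF                           # MWh(e) per kgU
    print(f"Bu = {bu:4.0f} GWd/t: x = {x_p*100:4.2f} %, "
          f"front-end {cost_kg:6.0f} $/kgU, {cost_kg/mwh_e:5.2f} $/MWh(e)")
```

Under these assumptions the cost per kilogram and the energy per kilogram rise almost in step, which is exactly why the direct economic incentive is an open question.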
The other lectures will develop the theme of
very high burn-ups further by looking at the impact on fuel behaviour,
operation and safety. The entire subject is a very complicated one with many
different technical, economic and strategic factors which interact with one
another. As with many questions in the nuclear industry, the inherent
complexity means that there are no simple answers and not necessarily any one
single picture that applies to all countries that operate LWRs.
The lecture illustrates how technical questions become intimately mixed with strategic and economic ones, and how important it is for nuclear scientists and engineers to develop an awareness of these other areas.
3.2 Fuel Performance, Limits, Operational and Safety Issues
Lecturer: Dr Michel R. BILLAUX (Framatome ANP, USA)
The nuclear fuel designer is
confronted with a number of safety issues that affect the integrity of the fuel
elements. The most important
of these issues are described in the first part
of the lecture.
The objective of the
mechanical design criteria is to resolve the safety issues and prevent fuel rod
failures in normal operation and incidental conditions. These criteria are imposed by
the safety authorities. They vary from country
to country. In the U.S. they set constraints
on the rod inner pressure, cladding tangential deformation, cladding creep collapse, cladding fatigue, and cladding oxidation. Fuel melting is not
allowed. The mechanical fuel design criteria used in the U.S. are presented in the second part.
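As a flavour of how one such criterion is checked, here is an ideal-gas estimate of end-of-life rod internal pressure from the helium fill gas plus released fission gas. Every number is an illustrative assumption; real design verification uses a fuel performance code that tracks axial temperature and free-volume distributions.

```python
# Back-of-envelope end-of-life rod internal pressure (ideal gas).
R = 8.314                        # J/mol-K
AVOGADRO = 6.022e23

V_FREE = 25.0e-6                 # rod free volume (m^3), assumed
T_GAS = 600.0                    # average gas temperature at power (K), assumed
P_FILL, T_FILL = 2.5e6, 293.0    # He fill pressure (Pa) at fill temp (K), assumed

U_MASS = 2.0                     # uranium mass in the rod (kg), assumed
BURNUP = 60.0                    # discharge burnup (MWd/kgU), assumed
FGR = 0.08                       # fission gas release fraction, assumed
GAS_YIELD = 0.30                 # stable Xe+Kr atoms per fission, approx.
E_FISSION = 200.0e6 * 1.602e-19  # energy per fission (J), approx.

fissions = BURNUP * U_MASS * 8.64e10 / E_FISSION        # 1 MWd = 8.64e10 J
n_fission_gas = FGR * GAS_YIELD * fissions / AVOGADRO   # moles released
n_fill = P_FILL * V_FREE / (R * T_FILL)                 # moles of helium

p_eol = (n_fill + n_fission_gas) * R * T_GAS / V_FREE
print(f"estimated end-of-life rod internal pressure: {p_eol/1e6:.1f} MPa")
```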
The third part is devoted
to the thermal-mechanical high burnup effects that
may affect cladding integrity and have to be considered in mechanical design
calculations.
In the fourth part, the
different PCI-failure mechanisms are briefly described: stress-corrosion
cracking as well as delayed hydride cracking and brittle hydride failure, which
occur only at high burnup.
The use of fuel performance
codes and experimental databases is discussed in the last part of the lecture.
3.3 Physics Properties of Fuels at High Burn-ups
Lecturer: Dr Vincenzo V. RONDINELLA (Institute for Transuranium Elements, Karlsruhe, Germany)
During its in-pile life, each atom in the nuclear fuel
experiences a few thousand displacements from its initial lattice position. In
spite of this dramatic occurrence, typical LWR oxide fuel at the end of the
irradiation cycles still retains mechanical integrity and a crystalline
structure. However, its physical properties have undergone significant
alterations under the effect of radiation damage, of power and temperature
profiles, and of accumulation of fission and neutron absorption products.
The defects in the fuel structure (point and extended defects, micro- and macro-bubbles, solute and segregating impurities) accumulating with increasing burn-up will translate into significant alterations of important quantities such as thermal conductivity, density and mechanical properties. Fission gas production will contribute to fuel swelling and eventual pressurization of the fuel rod.
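A common way to represent the conductivity degradation is a phonon-resistance correlation of the form k = 1/(A(bu) + B T), with the defect term A growing with burn-up. The constants below are rounded, textbook-style illustrations, not the correlation of any particular fuel performance code.

```python
# Illustrative UO2 thermal-conductivity degradation with burnup.
def k_uo2(T, bu):
    """Thermal conductivity (W/m-K) at temperature T (K), burnup bu (GWd/tU)."""
    A = 0.0375 + 1.87e-3 * bu   # m-K/W: lattice + accumulated-defect resistance
    B = 2.165e-4                # m/W: phonon-phonon (Umklapp) term
    return 1.0 / (A + B * T)

for bu in (0.0, 30.0, 60.0, 90.0):
    row = ", ".join(f"k({T:.0f} K) = {k_uo2(T, bu):4.2f}"
                    for T in (600.0, 1000.0, 1500.0))
    print(f"bu = {bu:4.0f} GWd/t: {row}")
```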
At medium-to-high burnup, the accumulation of fission events finally produces a restructuring of the fuel, through grain subdivision and redistribution of gases and defects. The properties of the newly formed structure, the so-called rim (or high burn-up) structure, will characterize the overall quality and performance of the fuel.
These lectures will describe relevant properties
of high burn-up LWR fuel, including the main characterization tools to
investigate these fuel materials, and an overview of current views on the formation and consequences of the rim structure.
Topic 4: Fuel Behaviour During Design-basis Accidents in LWRs
4.1 LWR Physics and Thermo-hydraulics during DBA (LOCA, RIA, ATWS)
Lecturer: Dr David J. DIAMOND (Brookhaven National Laboratory, USA)
Three types of design-basis accidents will be discussed, namely reactivity-initiated accidents, loss-of-coolant accidents and anticipated transients without scram (ATWS). Each will be introduced by
explaining the sequence of events and the most important regulatory acceptance
criteria. The neutronic and thermal-hydraulic tools
used for analysis of these events will be explained in general and then the
codes PARCS, RELAP5, and TRACE will be considered. Sample calculations for
pressurized and boiling water reactors will be presented with the emphasis on
results that are most germane to fuel behaviour.
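As a feel for what these coupled codes solve, the following is a deliberately crude, zero-dimensional caricature of an RIA: point kinetics with one delayed-neutron group and adiabatic Doppler feedback. All parameters are illustrative assumptions, not plant data; PARCS, RELAP5, and TRACE solve the spatially resolved counterparts of these equations.

```python
# Zero-dimensional RIA caricature: point kinetics + Doppler feedback.
BETA = 0.0065          # delayed neutron fraction
GEN_TIME = 2.0e-5      # prompt neutron generation time (s), assumed
LAM = 0.08             # one-group precursor decay constant (1/s), assumed
RHO_0 = 1.2 * BETA     # step reactivity insertion of 1.2 dollars, assumed
ALPHA_D = -2.0e-5      # Doppler coefficient (delta-rho per K), assumed
P0_MW = 3000.0         # nominal core power (MW), assumed
HEAT_CAP = 300.0       # full-core heat capacity (MJ/K), assumed

dt, t_end = 1.0e-6, 0.5
n = 1.0                            # power relative to nominal
c = BETA / (GEN_TIME * LAM)        # precursors at initial steady state
energy, dT, t = 0.0, 0.0, 0.0      # full-power-seconds, fuel temp rise, time
peak_t, peak_n = 0.0, 1.0
while t < t_end:
    rho = RHO_0 + ALPHA_D * dT                      # net reactivity
    n += ((rho - BETA) / GEN_TIME * n + LAM * c) * dt
    c += (BETA / GEN_TIME * n - LAM * c) * dt
    energy += n * dt
    dT = energy * P0_MW / HEAT_CAP                  # adiabatic fuel heat-up
    t += dt
    if n > peak_n:
        peak_t, peak_n = t, n

print(f"peak power {peak_n:.0f} x nominal at {peak_t*1e3:.0f} ms, "
      f"fuel temperature rise {dT:.0f} K")
```

The self-limiting pulse shape (power rises until Doppler feedback cancels the prompt super-criticality) is the essential physics behind the Nordheim-Fuchs picture of such transients.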
4.2 Fuel Behaviour during RIA and LOCA
Lecturer: Dr Toyoshi FUKETA (Japan Atomic Energy Research Institute, Japan)
Fuel behaviour during two types of design-basis accidents, the reactivity-initiated accident (RIA) and the loss-of-coolant accident (LOCA), will be described and discussed. The sequence of fuel behaviour during each accident will be introduced, and the thermo-mechanical phenomena in each phase will be explained. For fuel behaviour in an RIA, the phenomena include PCMI (pellet/cladding mechanical interaction) and the resulting cladding failure in the early phase, ballooning and rupture in the post-DNB (departure from nucleate boiling) phase, fission gas release, and fuel fragmentation and mechanical energy generation as post-failure events. Regarding rod behaviour in a LOCA, ballooning and rupture in the blow-down phase, oxidation and hydrogen absorption at high temperature, rod fracture at quench, and pellet relocation are discussed. The lecture includes an introduction to pre-existing and on-going research programmes, the currently available database, and some examples of the most important regulatory criteria.
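One quantitative thread in the LOCA discussion is high-temperature steam oxidation of the cladding, usually modelled with parabolic kinetics. The sketch below uses an Arrhenius rate constant whose coefficients are order-of-magnitude placeholders (assumptions, not the licensed Baker-Just or Cathcart-Pawel values) to show how equivalent cladding reacted (ECR) grows with time and temperature.

```python
import math

# Parabolic steam-oxidation sketch for Zircaloy: oxide thickness grows
# as delta^2 = K(T) * t.  A_RATE and Q_ACT are illustrative placeholders.
A_RATE = 1.3e-5        # m^2/s, assumed pre-exponential factor
Q_ACT = 1.67e5         # J/mol, assumed activation energy
R_GAS = 8.314          # J/mol-K
PBR = 1.56             # Pilling-Bedworth ratio, ZrO2/Zr volume
WALL = 570e-6          # cladding wall thickness (m), typical 17x17 PWR

def ecr(T_kelvin, seconds):
    """Equivalent cladding reacted (fraction) after isothermal exposure."""
    k = A_RATE * math.exp(-Q_ACT / (R_GAS * T_kelvin))
    oxide = math.sqrt(k * seconds)          # ZrO2 layer thickness (m)
    return (oxide / PBR) / WALL             # metal consumed / initial wall

for T_c in (1000.0, 1100.0, 1204.0):
    t17 = next(t for t in range(1, 100_000) if ecr(T_c + 273.15, t) >= 0.17)
    print(f"{T_c:6.0f} C: ECR(300 s) = {100*ecr(T_c + 273.15, 300):4.1f} %, "
          f"17 % ECR reached after ~{t17} s")
```

The strong Arrhenius temperature dependence is why peak cladding temperature and time at temperature dominate LOCA acceptance arguments.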
4.3 Regulations and Associated Methodologies
Lecturer: Dr Georges HACHE (Institut de Radioprotection et de Sûreté Nucléaire, France)
According to the defense-in-depth approach, in the design and licensing of light-water reactors it is postulated that a small set of low-probability accidents will occur, and it is required that the reactor be able to accommodate or mitigate their consequences without affecting public health and safety.
Examples of such postulated accidents are the loss-of-coolant accidents (LOCA) and
the reactivity-initiated
accidents (RIA). The characteristics
of these accidents serve to set the
requirements for a number
of the reactor components
or safety systems. The regulatory criteria and associated evaluation models, together with their history, will be described. They were established mainly in the 1970s, when fuel burnup was limited and
zircaloy cladding was used in western countries. In
the mid 1990s, the safety authorities
learned that some regulatory criteria, which had been used to ensure benign behavior of these accidents, might not be adequate at
high burnups. Further, there were questions about the applicability of these criteria for new cladding alloys being introduced
by the industry. Faced with these
concerns, research programs were initiated
to investigate the effects of high burnup and new cladding alloys. Although these programs are not yet finished, emerging tendencies to revise some criteria and evaluation models will be described.
4.4 Open Issues, On-going and Planned Research
Lecturer: Dr Wolfgang WIESENACK (OECD Halden Reactor Project, Norway)
Increased
fuel discharge burnups and uprated
nuclear power plants pose new challenges for fuel performance in both normal
and off-normal conditions. Considerable efforts are therefore made worldwide to
assess safety margins and provide data in support of existing safety criteria
or their revision. In this context, the following items will be addressed:
1 Introduction
2 Fuel and cladding developments with an impact on safety margins and safety criteria
3 Loss-of-Coolant Accident (LOCA)
3.1 Open issues
3.2 On-going and planned safety research
3.3 New safety criteria
4 Reactivity Insertion Accident (RIA)
4.1 Open issues
4.2 On-going and planned safety research
4.3 New safety criteria
5 Testing methodology
5.1 RIA simulation in test reactors
5.2 LOCA hot lab and in-core testing methodology
6 Supporting code developments
6.1 Whole core / system codes
6.2 Rod or bundle codes
7 Conclusion
The content is based on input from major
research organisations and the work and reports produced by the NEA-CSNI
“Special Experts Group on Fuel Safety Margins”.
Topic 5: Impact of Uncertainties on Code Predictions
5.1 Propagation of Uncertainties in Core Neutronics
Lecturer: Prof Massimo SALVATORES (Commissariat à l’Energie Atomique, France)
1. Introduction
2. A formal approach to propagating uncertainties, based on Generalized Perturbation Theory (a toy numerical illustration follows this outline)
3. The problem of covariance data
4. Application to Generation-IV systems
5. The potential impact on design assessment
6. The role and definition of target accuracies
7. A new field of application: the system analysis codes
8. Conclusions
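For small parameter variations, the formalism of item 2 reduces in practice to the first-order "sandwich rule", var(R) = S C S^T, combining GPT-computed sensitivities S with a covariance matrix C. The following toy illustration uses invented sensitivities and covariances purely to show the mechanics, including the per-parameter breakdown used when defining target accuracies (item 6).

```python
import numpy as np

# Sandwich rule var(R) = S C S^T with relative sensitivities S (as GPT
# would produce) and a relative covariance matrix C.  All numbers are
# invented for illustration.
S = np.array([[0.45, -0.30, 0.12, 0.05]])   # dR/R per dp/p, 4 parameters

std = np.array([0.02, 0.05, 0.10, 0.03])    # relative 1-sigma uncertainties
corr = np.array([[1.0, 0.3, 0.0,  0.0],     # assumed correlation structure
                 [0.3, 1.0, 0.0,  0.0],
                 [0.0, 0.0, 1.0, -0.2],
                 [0.0, 0.0, -0.2, 1.0]])
C = corr * np.outer(std, std)               # relative covariance matrix

var_R = (S @ C @ S.T).item()
print(f"relative uncertainty on R: {100 * np.sqrt(var_R):.2f} %")

# Per-parameter contributions: var(R) = sum_i S_i (C S^T)_i.  The largest
# terms indicate where better covariance data pay off most.
for i, contrib in enumerate((S * (C @ S.T).ravel()).ravel()):
    print(f"  parameter {i}: {100 * contrib / var_R:5.1f} % of var(R)")
```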
5.2 Coupled Simulations Accounting for Uncertainties
Lecturer: Prof Dan G. CACUCI (University of Karlsruhe, Germany)
A physical system is modeled mathematically in terms
of: (a) linear and/or nonlinear equations that relate the system's independent
variables and parameters to the system's state (i.e., dependent) variables, (b)
inequality and/or equality constraints that delimit the ranges of the system's
parameters, and (c) one or several quantities, customarily referred to as
system responses (or objective functions, or indices of performance) that are
to be analyzed as the parameters vary over their respective ranges. Models of
complex physical systems usually involve two distinct sources of uncertainties,
namely: (i) stochastic
uncertainty, which arises because the system under investigation can behave
in many different ways, and (ii) subjective
or epistemic uncertainty, which arises from the inability to specify an
exact value for a parameter that is assumed to have a constant value in the
respective investigation. Epistemic (or subjective) uncertainties characterize
a degree of belief regarding the location of the appropriate value of each
parameter. In turn, these subjective uncertainties lead to subjective
uncertainties for the response, thus reflecting a corresponding degree of
belief regarding the location of the appropriate response values as the outcome
of analyzing the model under consideration. A typical example of a complex
system that involves both stochastic and epistemic uncertainties is a nuclear
reactor power plant: in a typical risk analysis of a nuclear power plant,
stochastic uncertainty arises due to the hypothetical
accident scenarios which are considered in the respective risk analysis,
while epistemic uncertainties arise because of uncertain parameters that
underlie the estimation of the probabilities and consequences of the respective
hypothetical accident scenarios.
Sensitivity and uncertainty analysis procedures can be
either local or global in scope. The objective of local analysis is to analyze the behavior of the system response
locally around a chosen point (for static systems) or chosen trajectory (for
dynamical systems) in the combined phase space of parameters and state
variables. On the other hand, the objective of global analysis is to determine all of the system's critical points
(bifurcations, turning points, response maxima, minima, and/or saddle points)
in the combined phase space formed by the parameters and dependent (state)
variables, and subsequently analyze these critical points by local sensitivity
and uncertainty analysis. The methods for sensitivity and uncertainty analysis
are based on either statistical or deterministic procedures. In principle,
both types of procedures can be used for either local or for global sensitivity
and uncertainty analysis, although, in practice, deterministic methods are used
mostly for local analysis while statistical methods are used for both local and
global analysis. It is important to note that all statistical uncertainty and
sensitivity analysis methods first commence with the “uncertainty analysis”
stage, and only subsequently proceed to the “sensitivity analysis” stage; this
procedural path is the reverse of the procedural (and conceptual) path
underlying the deterministic methods of sensitivity and uncertainty analysis, where
the sensitivities are determined prior to using them for uncertainty analysis.
In practice, sensitivities cannot be computed exactly
by using statistical methods; this can be done only by using deterministic methods. The deterministic
methods most commonly used for computing local sensitivities are the
“brute-force” method based on recalculations, the direct method (including the
decoupled direct method), the Green’s function method, the forward sensitivity
analysis procedure (FSAP), and the adjoint sensitivity analysis procedure (ASAP). The direct method and the FSAP require at least as many
model-evaluations as there are parameters in the model, while the ASAP requires a single model-evaluation
of an appropriate adjoint model, whose source term is
related to the response under investigation. The ASAP is the most efficient method for computing local sensitivities
of large-scale systems, when the number of parameters and/or parameter
variations exceeds the number of responses of interest. The adjoint
model requires relatively modest additional resources to develop and implement
if this is done simultaneously with the development of the original model. If,
however, the adjoint model is constructed a posteriori, considerable skills may be
required for its successful development and implementation.
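A toy linear model makes the forward/adjoint economics tangible. Below, the response R = c^T x of an invented 3x3 system A x = b is differentiated with respect to three parameters, each scaling one diagonal entry of A: the adjoint route needs one extra solve of A^T lam = c regardless of the number of parameters, while the brute-force route needs one perturbed solve per parameter.

```python
import numpy as np

# Forward vs adjoint local sensitivities for A(p) x = b, R = c^T x,
# where parameter p_k scales the k-th diagonal entry of A (nominal p_k = 1).
A0 = np.array([[ 4.0, -1.0,  0.0],
               [-1.0,  4.0, -1.0],
               [ 0.0, -1.0,  4.0]])
b = np.array([1.0, 2.0, 1.0])
c = np.array([0.3, 0.5, 0.2])
x = np.linalg.solve(A0, b)            # one base-case (forward) solve

# ASAP-style route: ONE adjoint solve serves all parameters...
lam = np.linalg.solve(A0.T, c)        # adjoint system: A^T lam = c
for k in range(3):
    dA = np.zeros((3, 3))
    dA[k, k] = A0[k, k]               # dA/dp_k for the diagonal scaling
    dR_adjoint = -lam @ (dA @ x)      # dR/dp_k = lam^T (db/dp_k - dA/dp_k x)
    # ...while brute force needs one perturbed solve PER parameter.
    eps = 1.0e-6
    A_pert = A0.copy()
    A_pert[k, k] *= 1.0 + eps
    dR_bruteforce = (c @ np.linalg.solve(A_pert, b) - c @ x) / eps
    print(f"dR/dp_{k}: adjoint {dR_adjoint:+.6f}, brute force {dR_bruteforce:+.6f}")
```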
Once they become available, the exact local
sensitivities can be used for the following purposes: (i)
understand the system by highlighting important data; (ii) eliminate
unimportant data; (iii) determine effects of parameter variations on the
system’s behavior; (iv) design and optimize the system (e.g., maximize
availability/minimize maintenance); (v) reduce over-design; (vi) prioritize the
improvements to be effected in the respective system; (vii) prioritize introduction
of data uncertainties; and (viii) perform local uncertainty analysis by using the method of “propagation of errors” (also known as “propagation of moments,” or the “Taylor-series” method).
To begin with, this Lecture provides a brief
description of selected definitions and considerations underlying the theory
and practice of measurements and the errors associated with them. After
reviewing the main sources and features of errors, the current procedures for
dealing with errors and uncertainties are presented for direct and for indirect
measurements, to set the stage for a fundamental concept used for assessing the
magnitude and effects of errors both in complex measurements and computations.
The practical consequences of this fundamental concept are embodied in the “propagation of errors (moments)” equations.
The propagation-of-errors equations provide a systematic way of obtaining the
uncertainties in results of measurements and computations, arising not only
from uncertainties in the parameters that enter the respective computational
model but also from numerical approximations. Furthermore, the “propagation of
errors” equations combine systematically and consistently the parameter errors
with the sensitivities of responses (i.e., results of measurements and/or
computations) to the respective parameters, thus providing the symbiotic
linchpin between the objectives of uncertainty analysis and those of
sensitivity analysis.
Historically, the development of large-scale
simulation models took many years and invariably involved large, and sometimes
changing, teams of scientists. Furthermore, such complex models consist of many
inter-coupled modules, each module simulating a particular physical
sub-process, serving as “bricks” within the structure of the respective
large-scale simulation code system. Since the adjoint
sensitivity analysis procedure (ASAP)
has not been widely known in the past, most of the extant large-scale, complex
models have been developed without also having simultaneously developed and
implemented the corresponding adjoint sensitivity
model. Implementing a posteriori the ASAP for large-scale simulation codes is
not trivial, and the development and implementation of the adjoint
sensitivity model for the entire large-scale code system can seldom be executed
all at once, in one fell swoop. Actually, an “all-or-nothing” approach for
developing and implementing the complete, and correspondingly complex, adjoint sensitivity model for a large-scale code is at best
difficult (and, at worst, impractical), and is therefore not recommended.
Instead, the recommended strategy is a module-by-module implementation of the ASAP. In this approach, the ASAP is applied step-wise, to each
simulation module, in turn, to develop a corresponding adjoint
sensitivity system for each module. As the final step in this “modular”
implementation of the ASAP, the adjoint sensitivity systems for each of the respective
modules are “augmented,” without redundant effort and/or loss of information,
until all adjoint modules are judiciously connected
together, accounting for all of the requisite feedbacks and liaisons between
the respective adjoint modules.
This Lecture also sketches the theoretical foundation
for the modular implementation of the
ASAP for coupled, complex simulation
code systems; this modular approach commences with a selected code module, and
then proceeds by augmenting the size of the adjoint
sensitivity system, module by module, until exhaustively completing the entire
coupled system under consideration. Finally, this Lecture concludes by
presenting an illustrative application of the coupled adjoint
fluid dynamics/heat structure sensitivity model, ASM-REL/TFH, which was
developed within the large-scale safety analysis code RELAP5/MOD3.2, for an
efficient sensitivity analysis of the QUENCH-04 experiment performed at the
Research Center Karlsruhe (FZK).
5.3 Best-estimate Safety Analysis
Lecturer: Dr Eric CHOJNACKI (Institut de Radioprotection et de Sûreté Nucléaire, France)
Best-estimate codes are designed to provide unbiased and physically realistic results, in contrast to conservative codes. Owing to input uncertainties and other uncertainty sources, such as incomplete knowledge of the physical phenomena being modelled, the calculation results obtained from these ‘best-estimate’ or advanced computer codes are themselves known only with some imprecision. As best-estimate codes are increasingly used for accident management procedures and are planned to be used for licensing purposes, it has become of prime importance to be able to quantify their uncertainties. Thus, the OECD/CSNI has supported an Uncertainty Methods Study (UMS) to compare different uncertainty methods on a small-break Loss of Coolant Accident (LOCA) transient at the experimental facility LSTF, and is now supporting the Best-Estimate Methodologies for Uncertainty and Sensitivity Evaluation (BEMUSE) programme, which consists of performing an uncertainty and sensitivity analysis for a large-break LOCA transient on the integral test facility LOFT and on a nuclear power plant. Nine out of ten participants in the BEMUSE programme use probabilistic methods with many common characteristics. In particular, for the uncertainty propagation, all participants use a random sampling method (LHS or SRS), and for evaluating the uncertainty margins a majority of participants intend to use order-statistics results such as Wilks’ formula.
After a quick review of the principles of probabilistic methodologies and Monte-Carlo simulation, we will explain the benefits and drawbacks of the LHS and SRS sampling techniques. A special focus will be given to the use of order statistics, both to limit the number of code calculations and to derive uncertainty margins directly from code results without additional hypotheses such as fit tests or response-surface techniques. Moreover, order statistics make it possible to measure the quality of uncertainty margins and consequently to assess the effect of sample size on the evaluated safety margins.
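As an illustration of why Wilks’ formula is attractive, the sketch below computes the minimum number of code runs for a one-sided 95/95 tolerance limit at several orders. The function name and example values are ours, not BEMUSE specifications.

```python
import math

# Wilks-type sample size: smallest n such that the `order`-th largest of
# n independent runs bounds the gamma-quantile of the output with
# confidence beta.  For order 1 this reduces to 1 - gamma**n >= beta.
def wilks_n(gamma=0.95, beta=0.95, order=1):
    n = order
    while True:
        # confidence = P(at least `order` of the n runs exceed the
        # gamma-quantile) = 1 - Binomial(n, 1-gamma) CDF at order-1
        conf = 1.0 - sum(math.comb(n, k) * (1.0 - gamma)**k * gamma**(n - k)
                         for k in range(order))
        if conf >= beta:
            return n
        n += 1

print("95/95 one-sided, 1st order:", wilks_n())          # the classic 59 runs
print("95/95 one-sided, 2nd order:", wilks_n(order=2))   # 93 runs
print("95/95 one-sided, 3rd order:", wilks_n(order=3))   # 124 runs
```

With that many code runs on randomly sampled inputs, the largest computed figure of merit (for example, peak cladding temperature) is taken directly as the 95/95 upper tolerance limit, with no fit tests or response surfaces.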
However, although Monte-Carlo methods provide extremely flexible and powerful techniques for solving many of the uncertainty propagation problems encountered in nuclear safety analysis, they present two major drawbacks. Like most methods based on probability theory, Monte-Carlo methods require a great deal of knowledge: to determine the probability law associated with each uncertain parameter, it is necessary to have collected a considerable amount of data or to make assumptions in place of such empirical information. Moreover, to perform a Monte-Carlo simulation, it is also necessary to provide information about all the possible dependencies between the uncertain parameters. Unfortunately, in practice, such information is rarely fully available, and the impact of the assumptions made to compensate for this lack of knowledge can degrade the relevance of the resulting decision-making. Thus, in the second part of this lecture, we will present, through a simple example, recent advances in Dempster-Shafer theory that make it possible to overcome the robustness problem of uncertainty assessments arising from the choice of the marginal probabilities, and of their correlations, used to model the uncertainties in standard Monte-Carlo simulations.
Topic 6: Space Nuclear Systems
6.1 History and Motivations for Space Propulsion Reactors
The space nuclear power and propulsion program in the United States was motivated by the need to develop intercontinental ballistic missiles in the early 1950s. The nuclear rocket engine development program started in 1955 with the initiation of the ROVER project. The first step in the ROVER program was the KIWI project, which included the development and testing of 8 non-flyable ultrahigh temperature nuclear test reactors during 1955-1964. The KIWI project was the precursor to the PHOEBUS carbon-based fuel reactor project, which resulted in ground testing of three high-power reactors during 1965-1968, with the last reactor operated at 4,100 MW. During the same period a parallel program was pursued to develop a nuclear thermal rocket based on cermet fuel technology. The third component of the ROVER program was the Nuclear Engine for Rocket Vehicle Applications (NERVA), initiated in 1961 with the primary goal of designing the first generation of nuclear rocket engines based on the KIWI project experience. The fourth component of the ROVER program was the Reactor In-Flight Test (RIFT) project, which was intended to design, fabricate, and flight-test a NERVA-powered upper-stage engine for the Saturn-class launch vehicle. During the ROVER program era, the United States also ventured into a comprehensive space nuclear power program that included the design and testing of several compact reactors and space-suitable power conversion systems, and the development of a few lightweight heat rejection systems. In contrast to its sister ROVER program, the space nuclear power program resulted in the first-ever deployment and in-space operation of a nuclear power system, SNAP-10A, in 1965.
6.2 Main Challenges and Technical Issues related to Space Nuclear Systems
Space nuclear power and propulsion systems are operated at very high power densities and temperatures. Other unique features of space nuclear power systems include compactness, light weight, and tight coupling with power conversion to electricity or thrust. Meeting all design requirements for space nuclear systems presents technical challenges in the areas of fuels and materials, nuclear and thermal-fluid design, safety and reliability, and power conversion.
In general, high-power space nuclear systems are divided into two categories: Nuclear Thermal Propulsion (NTP) and Nuclear Electric Propulsion (NEP). The propellant in NTP systems is hydrogen, which is heated by the reactor core to temperatures as high as 3300 K. Uranium carbides and tungsten-alloy-based cermets are the primary fuel materials of choice for NTP systems. In particular, uranium-refractory bi- and tri-carbides such as (U,Zr)C and (U,Zr,Nb)C, and cermets such as UO2-W/Re, UN-W/Re, and (U,Zr)CN-W/Re, are considered the most promising fuel materials for high-performance NTP systems. The key performance indicator for NTP systems is fuel stability and performance in a hot hydrogen environment. Material requirements for NEP systems vary with the design of the power source and electric thrusters. Multimegawatt NEP operation requires higher-temperature power cycles to maximize the specific power (kW/kg). High-temperature power conversion using magnetohydrodynamics (MHD), thermionics, alkali-metal Rankine cycles, or Brayton gas turbines requires special materials such as single-crystal tungsten, molybdenum, and chromium alloys. Multimegawatt power systems also require lightweight radiator materials for heat rejection at elevated temperatures (T > 800 K).
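The incentive for 3300 K hydrogen follows from the ideal nozzle relation: exhaust velocity scales roughly as sqrt(T/M), so hydrogen's low molar mass roughly doubles the specific impulse of a chemical H2/O2 engine. A frozen-flow, ideal-gas sketch, with the pressure ratio and gamma values as stated assumptions:

```python
import math

# Ideal nozzle exhaust velocity: v_e = sqrt(2 g/(g-1) R T/M (1 - PR^-((g-1)/g))).
R_UNIV = 8.314          # J/mol-K
P_RATIO = 1000.0        # chamber-to-exit pressure ratio, assumed

def exhaust_velocity(T_chamber, molar_mass, gamma):
    """Frozen-flow ideal exhaust velocity (m/s)."""
    term = 1.0 - P_RATIO ** (-(gamma - 1.0) / gamma)
    return math.sqrt(2.0 * gamma / (gamma - 1.0)
                     * R_UNIV * T_chamber / molar_mass * term)

# NTP: pure H2 at 3300 K; chemical benchmark: H2/O2 combustion products
# (mean molar mass and gamma are rough assumptions).
for name, T, M, g in (("NTP, H2 at 3300 K", 3300.0, 2.016e-3, 1.40),
                      ("chemical H2/O2",    3500.0, 13.5e-3,  1.20)):
    ve = exhaust_velocity(T, M, g)
    print(f"{name:18s}: v_e = {ve:5.0f} m/s, Isp = {ve / 9.81:4.0f} s")
```

Under these assumptions the NTP case lands near 900 s of specific impulse against roughly 430 s for the chemical case, which is the core motivation for tolerating the fuel and materials challenges listed above.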
The most distinguishing nuclear design features of space reactors include the use of a reflector and of a solid moderator in thermal systems, and the use of highly enriched uranium. The main challenges in reactor physics calculations are primarily attributed to the generation of high-temperature neutron cross sections. The safety issues are dominated by the potential for a criticality accident initiated by water submersion or impact with the ground.
Technical Visit at the Forschungszentrum Karlsruhe GmbH: “Innovative cooling technologies”
Special Event
Seminar “The new FRM-II Reactor in Munich”
Lecturer: Prof Dr K. Böning (Technische Universität München)