Graphical Analysis

Examples of sharper graphical analysis include images colour-modulated to show the variation of curvature, and nests of cross-sections (contours) which skilled loftsmen can use to assess the quality of a surface.

From: Handbook of Computer Aided Geometric Design, 2002

Mahalanobis, Prasanta Chandra

C. Radhakrishna Rao , in Encyclopedia of Social Measurement, 2005

Fractile Graphical Analysis

Fractile graphical analysis is an important generalization of the method and use of concentration (or Lorenz) curves. A Lorenz curve for wealth in a population tells, for example, that the least wealthy 50% of the population owns 10% of the wealth. (If wealth were equally distributed, the Lorenz curve would be a straight line.) The comparison of Lorenz curves for two or more populations is a graphical way to compare their distributions of wealth, income, numbers of acres owned, frequency of use of library books, and so on.

One of Mahalanobis' contributions in this domain was to stress the extension of the Lorenz curve idea to two variables. Thus, it is possible to consider, for example, both wealth and consumption for families, and to draw a curve from which it may be read that the least wealthy 50% of the families consume 27% of total consumption, or a certain quantity per family on the average. Or, treating the variables in the other direction, it might be found that the 20% least-consuming families account for 15% of the wealth, or a certain value per family. (The numbers in these examples are hypothetical and only for illustration.) Such bivariate generalized Lorenz curves can, of course, also be usefully compared across populations.
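A bivariate generalized Lorenz curve of this kind can be computed directly from family-level data. The sketch below is only an illustration of the construction described above; the simulated wealth and consumption figures and the helper lorenz_points are assumptions, not Mahalanobis' procedure.

```python
import numpy as np

def lorenz_points(sort_by, accumulate):
    """Cumulative population share vs. cumulative share of `accumulate`,
    with families ordered by `sort_by`."""
    order = np.argsort(sort_by)
    cum_pop = np.arange(1, len(order) + 1) / len(order)          # fraction of families
    cum_share = np.cumsum(accumulate[order]) / accumulate.sum()  # fraction of the total
    return cum_pop, cum_share

rng = np.random.default_rng(0)
wealth = rng.lognormal(mean=10.0, sigma=1.0, size=1000)
consumption = 0.3 * wealth + rng.lognormal(mean=8.0, sigma=0.5, size=1000)

pop, wealth_share = lorenz_points(wealth, wealth)        # ordinary Lorenz curve
pop, cons_share = lorenz_points(wealth, consumption)     # bivariate generalization
print(cons_share[len(pop) // 2 - 1])   # consumption share of the least wealthy 50%
```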


URL:

https://www.sciencedirect.com/science/article/pii/B0123693985002231

PRECISE ULTRAHIGH-PRESSURE EXPERIMENTS

C.E. Ragan , ... E.E. Robinson , in Shock Waves in Condensed Matter 1983, 1984

4 IMPEDANCE-MATCHING RESULTS

A graphical analysis based on the impedance-matching technique [9] and the measured interface shock velocities was used to determine a Hugoniot point for each lower layer sample relative to the molybdenum standard and for each upper layer sample relative to the adjacent lower layer material. Each graph consisted of plots in the pressure versus particle-velocity (P-u) plane of the experimentally determined Rayleigh line (P = ρ₀Δu) for the upper material and the calculated release isentrope (RI) or reflected-shock (RS) Hugoniot for the lower standard material, whose initial state was defined by its measured interface shock velocity. Similar plots using shock velocities that differed by one standard deviation for both materials defined a region of uncertainty in this plane and provided error bars for the Hugoniot point. The location of the calculated Hugoniot of the upper material on this graph provided a comparison with the measured point and gave a direct check on the consistency of the theoretical treatments used for the two EOSs.
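The following sketch illustrates the impedance-matching construction numerically. It is a simplified stand-in for the analysis described above, not the authors' code: a linear Us-up fit replaces the tabular SESAME EOS, the release isentrope is approximated by the standard's Hugoniot reflected about the matched particle velocity, and all numerical values are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

def hugoniot_pressure(u, rho0, c0, s):
    """Principal-Hugoniot pressure for a linear Us = c0 + s*u relation
    (GPa if rho0 is in g/cm^3 and velocities are in km/s)."""
    return rho0 * (c0 + s * u) * u

def impedance_match(rho0_std, c0_std, s_std, D_std, rho0_smp, D_smp):
    """Intersect the sample Rayleigh line P = rho0*D*u with the (reflected)
    Hugoniot of the standard, whose state is fixed by its measured shock
    velocity D_std.  Returns the matched (u, P) point for the sample."""
    u_std = (D_std - c0_std) / s_std                    # particle velocity in standard
    release = lambda u: hugoniot_pressure(2.0 * u_std - u, rho0_std, c0_std, s_std)
    rayleigh = lambda u: rho0_smp * D_smp * u
    # for a lower-impedance sample the crossing lies between u_std and 2*u_std
    u_match = brentq(lambda u: rayleigh(u) - release(u), u_std, 1.999 * u_std)
    return u_match, rayleigh(u_match)

# illustrative numbers only: a molybdenum-like standard and a lighter upper layer
print(impedance_match(10.22, 5.12, 1.23, 30.6, 8.93, 32.1))
```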

The derived P-u Hugoniot points are given for the indicated materials in the last two columns of Table I, with percent errors shown in parentheses. The calculated shock velocities based on theoretical EOSs are compared with the measured values in columns 5 and 6. These comparisons are given as percent differences between the calculated (cal) and experimental (exp) results [= 100 × (Δ_cal/Δ_exp − 1)].

For the case in which a material served as a standard without being treated as an upper level sample, the Hugoniot data in Table I were derived from its theoretical EOS using the measured shock velocity and its associated uncertainty. The values given for the molybdenum (two cases) were obtained in this manner using the SESAME EOS[2] for material no. 2981, and the results reflect the P-u errors corresponding to the uncertainty in Δ.

Details of the analysis technique are illustrated (as plots on expanded scales) for the copper sample and for the iron/quartz pair in Figures 3 and 4, respectively. In Figure 3, the molybdenum RI (Δ = 30.60 km/s) intersects the copper Rayleigh line (Δ = 32.10 km/s) to define a Hugoniot point (circled) for copper. The region bounded by the long-dashed curves shows the uncertainty for this point. Calculated Hugoniots based on three different EOSs are also shown in the figure as heavy curves. The Hugoniot based on the original SESAME EOS (dashed, material no. 3330) barely passes through the error box. The other Hugoniots, shown as solid (material no. 3332) and as dot-dashed (material no. 3331) curves are based on recent improved theoretical treatments[3,4] and give much better agreement with experiment. The large dot indicates the predicted point for SESAME material no. 3332.

Figure 3. Illustration of the impedance-matching analysis technique for the copper sample.

Figure 4. Illustration of the analysis technique for the iron(a) and quartz(b) samples.

Figure 4 shows a similar comparison for the iron and the quartz samples. In (a), the calculated Hugoniot for iron (dashed) based on the original SESAME EOS (material no. 2140) lies outside the region of uncertainty. The heavy solid curve shows the calculated Hugoniot based on an improved theoretical treatment[5] and is in very good agreement with the experimental point (circled). The derived Hugoniot point for quartz based on the improved iron EOS is circled in (b). The SESAME-based Hugoniot (heavy solid curve) for quartz (material no. 7380) is in good agreement with the experiment and results in the predicted point indicated by the large dot. A similar analysis based on the original SESAME EOS for iron (labeled old) results in an experimental point at a considerably higher pressure (≈2.9 TPa), and the region of uncertainty (not shown) around this point barely overlaps the calculated quartz Hugoniot.


URL:

https://www.sciencedirect.com/science/article/pii/B978044486904350018X

Relativistic Photography

Sadri Hassani , in Special Relativity, 2017

7.7.2 Moving Sphere

Now is the time to look at the full-fledged case of a moving sphere not necessarily placed at the origin. I'll presently examine the image of the sphere centered at (x_c, 0, 0) in camera C̄. Substituting (7.56) in (7.7), you should be able to show that

(7.67) $x = \gamma\left(x_c + a\sin\theta\cos\varphi - \beta\sqrt{x_c^2 + a^2 + b^2 + 2a\,(x_c\sin\theta\cos\varphi - b\cos\theta)}\right),\qquad y = a\sin\theta\sin\varphi,\qquad z = a\cos\theta.$

Differentiating these with respect to θ and φ, you can find the tangent vectors $\mathbf{r}_\theta$ and $\mathbf{r}_\varphi$ and show that (assuming θ ≠ 0, π) the components of $\mathbf{r}_\theta \times \mathbf{r}_\varphi$ are given by

(7.68)
$\frac{(\mathbf{r}_\theta \times \mathbf{r}_\varphi)_x}{a^2\sin^2\theta} = \cos\varphi$
$\frac{(\mathbf{r}_\theta \times \mathbf{r}_\varphi)_y}{a^2\sin^2\theta} = \gamma\sin\varphi\left(1 - \frac{\beta x_c}{\sqrt{x_c^2 + a^2 + b^2 + 2a\,(x_c\sin\theta\cos\varphi - b\cos\theta)}}\right)$
$\frac{(\mathbf{r}_\theta \times \mathbf{r}_\varphi)_z}{a^2\sin\theta} = \gamma\left(\cos\theta - \frac{\beta\,(x_c\cos\theta + b\sin\theta\cos\varphi)}{\sqrt{x_c^2 + a^2 + b^2 + 2a\,(x_c\sin\theta\cos\varphi - b\cos\theta)}}\right).$

If you substitute these components as well as the coordinates of (7.67) in Equation (C.17) and persevere long enough, you will be able to obtain

(7.69) $\gamma\,(x_c\sin\theta\cos\varphi - b\cos\theta + a)\times\left(1 - \frac{\beta\,(x_c + a\sin\theta\cos\varphi)}{\sqrt{x_c^2 + a^2 + b^2 + 2a\,(x_c\sin\theta\cos\varphi - b\cos\theta)}}\right) = 0.$

In Problem 7.36 I have asked you to show that the factor in the second parentheses in (7.69) cannot be zero. Therefore, as I promised earlier, you obtain the same relation between θ and φ as in the stationary case (7.57), and Equations (7.58) and (7.59) describe the parametric equation of the image curve in C̄.

If you substitute (7.59) in (7.58) and plot the resulting parametric equation, you get an ellipse regardless of the values of x_c, a, b, and β. Therefore, you should suspect that a reparametrization could simplify this parametric equation and make the properties of the ellipse more transparent. Up to here, I have pulled all the reparametrizations out of a hat! And you may have wondered how I got reparametrizations like (7.61) and (7.65). Sometimes, trial and error is the only way. Other times, experimenting with graphs and a little familiarity with geometry can be very helpful. Now I let you in on a neat secret!

All the ellipses you get have their major axes along the u-axis. This should tell you that if you can find the two extreme values of u, you can determine both the coordinates of the center and the length of the major axis of the ellipse. In fact, if you denote the coordinates of the center by (u_c, 0) and the semi-major axis by A, then

$u_c = \frac{u_{\max} + u_{\min}}{2},\qquad A = \frac{|u_{\max} - u_{\min}|}{2},$

where u_max and u_min are the points where the ellipse crosses the u-axis on the right and left, respectively. The absolute value sign is necessary because A is a positive quantity. Now, the ellipse crosses the u-axis when v = 0, which, by the second equation in (7.58), corresponds to φ = 0, π. Hence,

(7.70)
$u_c = \frac{u|_{\varphi=0} + u|_{\varphi=\pi}}{2} = \frac{b\beta^2\gamma\,x_c\sqrt{x_c^2 + b^2 - a^2} - b\beta\gamma\,(x_c^2 + b^2)}{\left(\sqrt{x_c^2 + b^2 - a^2} - \beta x_c\right)(x_c^2 + b^2)}$
$A = \frac{\bigl|\,u|_{\varphi=0} - u|_{\varphi=\pi}\,\bigr|}{2} = \frac{a\gamma\,(b^2 + x_c^2/\gamma^2)}{\left|\sqrt{x_c^2 + b^2 - a^2} - \beta x_c\right|(x_c^2 + b^2)}.$

If the image curve is indeed an ellipse, we should be able to write it as

(7.71) $u - u_c = A\cos\phi,\qquad v = B\sin\phi,$

where cos ϕ and sin ϕ are functions of φ and B is independent of it. Substituting (7.70) and the first equations of (7.58) and (7.59) in the first equation of (7.71), you can calculate cos ϕ . As the calculation is cumbersome, a computer algebra program is very helpful. The result turns out to be the same as (7.61). Now evaluate sin ϕ and substitute it in the second equation of (7.71) to find B. You should show that

Image of a moving sphere centered at (x_c, 0, 0) is an ellipse elongated along the direction of motion!

(7.72) $B = \frac{a\sqrt{b^2 + x_c^2/\gamma^2}}{\left|\sqrt{x_c^2 + b^2 - a^2} - \beta x_c\right|\sqrt{x_c^2 + b^2}} = \sqrt{\frac{x_c^2 + b^2}{x_c^2 + \gamma^2 b^2}}\;A.$

In the case of approach (βx_c < 0), the value of B is smaller than the radius in (7.63) of the image of the stationary sphere (show this!). This is not due to any motional shrinkage because B is perpendicular to the direction of motion. It is due to the fact that the light rays captured by the camera are coming from a location that is farther away than when the sphere is stationary.
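A quick numerical check of Equations (7.70), (7.72), and the rest limit (7.63) can be done as below; the parameter values are arbitrary choices for illustration.

```python
import numpy as np

def semi_axes(a, b, xc, beta):
    """Semi-major and semi-minor axes A, B from Eqs. (7.70) and (7.72)."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    k = np.sqrt(xc**2 + b**2 - a**2)
    denom = abs(k - beta * xc)
    A = a * gamma * (b**2 + xc**2 / gamma**2) / (denom * (xc**2 + b**2))
    B = a * np.sqrt(b**2 + xc**2 / gamma**2) / (denom * np.sqrt(xc**2 + b**2))
    return A, B

a, b, xc = 1.0, 5.0, 3.0
A0, B0 = semi_axes(a, b, xc, 0.0)    # rest case: A = B = a/sqrt(xc^2 + b^2 - a^2)
A1, B1 = semi_axes(a, b, xc, -0.8)   # approach (beta*xc < 0): B1 < B0, while A1 > B1
print(A0, B0, A1, B1)
```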

Remark 7.7.2

The preceding discussion illustrates a beautiful example of the connection between numerical and graphical analysis on the one hand and analytical formulation on the other. The latter is, of course, what we are after. The discovery of the fruitfulness of this connection is due to Archimedes, who experimented with vessels and liquids to prove some fundamental theorems of geometry. In the example discussed here, the graph helped me to discover that the curve was an ellipse, and having familiarity with the properties of an ellipse helped me find the center, semi-major, and semi-minor axes of the ellipse in terms of the relevant parameters of the problem.  ■

Learning from Archimedes!

Some interesting features of the image of a moving sphere emerge by examining Equations (7.70) and (7.72). Firstly, when the sphere is not moving, β = 0, γ = 1, and these equations yield

$A_{\mathrm{rest}} = B_{\mathrm{rest}} = \frac{a}{\sqrt{x_c^2 + b^2 - a^2}}$

as in Equation (7.63).

Secondly, at large distances from the camera, i.e., when x_c ≫ b, the image in C̄ approaches a circle of radius

$\bar{r} = \lim_{|x_c|\to\infty} B = \lim_{|x_c|\to\infty} A = \frac{a}{|x_c|}\sqrt{\frac{1 + (x_c/|x_c|)\beta}{1 - (x_c/|x_c|)\beta}},$

while the radius of the image in C approaches

$r = \lim_{|x_c|\to\infty} B_{\mathrm{rest}} = \lim_{|x_c|\to\infty} A_{\mathrm{rest}} = \frac{a}{|x_c|} = \bar{r}\,\big|_{\beta=0}.$

Therefore,

(7.73) $\bar{r} = \sqrt{\frac{1 + (x_c/|x_c|)\beta}{1 - (x_c/|x_c|)\beta}}\;r,$

showing that the image in C̄ is smaller on approach (when x_c and β have opposite signs) and larger on recession (when x_c and β have the same sign). This is consistent with the small-angle approximation formula (7.11).

There is an intuitive physical explanation for the fact that the sphere appears as a circle when far away. The elongation of the photograph of any object occurs when there is a sufficiently large separation between the emission of light from the two extremes (the trailing and leading points) of the object or, equivalently, when there is a sufficiently large difference between the distances from the pinhole to the two extremes of the object. When the object is far away, this difference is small, and the elongation is not pronounced.

Thirdly, if the cameras take the picture of the sphere when it is at the origin,

$A = \gamma B = \frac{\gamma a}{\sqrt{b^2 - a^2}},$

which agrees with Equation (7.54).

Finally, we can investigate conditions under which the image of the moving sphere is a circle. This happens if and only if A = B in (7.71), which by (7.72) happens if and only if x_c² + b² = x_c² + γ²b², or b²(1 − γ²) = 0. The γ = 1 solution corresponds to the stationary sphere, already discussed in Section 7.7.1. For the moving sphere, the only way to capture a circular image is to have b = 0, i.e., to move the camera to the origin, so that the sphere is moving directly toward or away from C̄. Then the images in C̄ and C are circles of radii

Only a directly approaching or receding sphere has a circular image.

$\bar{r} = \frac{a}{\gamma\left|\sqrt{x_c^2 - a^2} - \beta x_c\right|}\qquad\text{and}\qquad r = \frac{a}{\sqrt{x_c^2 - a^2}},$

with

(7.74) $\bar{r} = \frac{\sqrt{x_c^2 - a^2}}{\gamma\left|\sqrt{x_c^2 - a^2} - \beta x_c\right|}\;r.$

Therefore, as already mentioned in Note 7.6.4, Penrose's argument that the image of a sphere is a circle applies only to the case where the sphere is directly approaching or directly receding from the camera.

From (7.74), it is quite obvious that r̄ < r on approach (βx_c < 0). On recession (βx_c > 0), we expect r̄ to be larger than r because of the limiting case of Equation (7.11). However, this may not be the case for all values of β. In fact, the denominator of Equation (7.74) can be made arbitrarily small! This is the situation against which I warned in Remarks 7.7.1 and C.1.1, because b = 0 here. Thus, r̄/r starts at 1, increases to infinity at β = √(x_c² − a²)/x_c, decreases back to 1 at γ = 2(x_c/a)² − 1, and continues to decrease for larger values of γ. Figure 7.9 shows this behavior for x_c = +3. Problem 7.14 looks at the behavior of a cube when directly approaching or receding from the camera. The behavior of a sphere is a little different from that of a cube because in the latter case, the camera collects photons coming from the single side facing it. Thus, the boundary from which image-forming photons emanate is a fixed square. In the case of the sphere, on the other hand, less and less of the surface gets photographed as the sphere approaches the camera.
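The behavior just described can be reproduced directly from Equation (7.74). The sketch below uses x_c = +3 as in Figure 7.9 and assumes a = 1, which is a choice of units rather than a value taken from the text.

```python
import numpy as np

a, xc = 1.0, 3.0
k = np.sqrt(xc**2 - a**2)

def ratio(beta):
    """r_bar / r from Eq. (7.74) for a sphere approaching or receding head-on."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return k / (gamma * abs(k - beta * xc))

print(ratio(1e-9))                      # ~1 as beta -> 0
print(ratio(0.999 * k / xc))            # blows up near beta = sqrt(xc^2 - a^2)/xc
gamma_unit = 2.0 * (xc / a)**2 - 1.0    # gamma at which the ratio returns to 1
beta_unit = np.sqrt(1.0 - 1.0 / gamma_unit**2)
print(ratio(beta_unit))                 # back to 1, then decreasing for larger gamma
```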


Figure 7.9. The behavior of the ratio r̄/r as a function of β for x_c = +3.


URL:

https://www.sciencedirect.com/science/article/pii/B9780128104118000079

Describing Data Sets

Sheldon M. Ross , in Introductory Statistics (Fourth Edition), 2017

2.6 Some Historical Comments

Probably the first recorded instance of statistical graphics—that is, the representation of data by tables or graphs—was Sir Edmund Halley's graphical analysis of barometric pressure as a function of altitude, published in 1686. Using the rectangular coordinate system introduced by the French scientist René Descartes in his study of analytic geometry, Halley plotted a scatter diagram and was then able to fit a curve to the plotted data.

In spite of Halley's demonstrated success with graphical plotting, almost all the applied scientists until the latter part of the 18th century emphasized tables rather than graphs in presenting their data. Indeed, it was not until 1786, when William Playfair invented the bar graph to represent a frequency table, that graphs began to be regularly employed. In 1801 Playfair invented the pie chart and a short time later originated the use of histograms to display data.

The use of graphs to represent continuous data—that is, data in which all the values are distinct—did not regularly appear until the 1830s. In 1833 the Frenchman A. M. Guerry applied the bar chart form to continuous crime data, by first breaking up the data into classes, to produce a histogram. Systematic development of the histogram was carried out by the Belgian statistician and social scientist Adolphe Quetelet about 1846. Quetelet and his students demonstrated the usefulness of graphical analysis in their development of the social sciences. In doing so, Quetelet popularized the practice, widely followed today, of initiating a research study by first gathering and presenting numerical data. Indeed, along with the additional steps of summarizing the data and then utilizing the methods of statistical inference to draw conclusions, this has become the accepted paradigm for research in all fields connected with the social sciences. It has also become an important technique in other fields, such as medical research (the testing of new drugs and therapies), as well as in such traditionally nonnumerical fields as literature (in deciding authorship) and history (particularly as developed by the French historian Fernand Braudel).

The term histogram was first used by Karl Pearson in his 1895 lectures on statistical graphics. The stem-and-leaf plot, which is a variant of the histogram, was introduced by the U.S. statistician John Tukey in 1970. In the words of Tukey, "Whereas a histogram uses a nonquantitative mark to indicate a data value, clearly the best type of mark is a digit."


URL:

https://www.sciencedirect.com/science/article/pii/B9780128043172000023

Rotation Algorithms: From Beginning to End

R.I. Jennrich , in Handbook of Latent Variable and Related Models, 2007

1 Introduction

Rotation algorithms began with the graphical methods of Thurstone (1947) for producing simple structure in factor analysis. Beginning with an initial reference structure he produced a sequence of simpler reference structures. Each was constructed from a graphical analysis of plots produced from a current reference structure. These rather labor-intensive methods actually worked quite well. Simple reference structures tend to correspond to simple loadings, so simplifying reference structures may be viewed as an "indirect" method of producing simple loadings in the terminology of Harman (1976).

A number of factor analysts, Carroll (1953), Neuhaus and Wrigley (1954), and Saunders (1953), independently proposed the first analytic rotation method. This was based on maximizing a criterion designed to measure the simplicity of a factor loading matrix. Their algorithm used a sequence of two-factor rotations, found in each case by analytically optimizing the criterion rather than by a graphical analysis of plots. The common criterion used by these authors is called the quartimax criterion. Unlike Thurstone's method, these methods required the factors to be orthogonal and, because of this restriction, often failed to produce results as nice as those obtained from graphical methods.

A number of alternative rotation criteria were proposed and optimized by sequences of analytic two factor rotations. Some of these, in particular, varimax (Kaiser, 1958), worked better than quartimax, but as with quartimax these were restricted to orthogonal rotation.

One difficulty with these early methods was that algorithms for each criterion were specific to that criterion. A new criterion required a new algorithm. The first step to remove this difficulty was a pairwise orthogonal rotation algorithm proposed by Jennrich (1970) for optimizing arbitrary quartic criteria which included most of the orthogonal rotation criteria in use at that time. The only criterion specific code required was a formula to define the criterion.

Carroll (1953) was the first to propose an analytic oblique method. He used a criterion appropriate for oblique rotation called the quartimin criterion and applied it to the reference structure. He showed how to make a sequence of one-factor-at-a-time rotations to optimize the criterion. Two problems with this approach were that it was restricted to the quartimin criterion and some modest generalizations of it, and that, like Thurstone's method, it was indirect.

Jennrich and Sampson (1966) were the first to provide a direct analytic method for oblique rotation. They showed how to optimize the quartimin criterion applied directly to the factor loadings using a sequence of one factor rotations. Unlike Carroll's, their method was direct and generalized easily to other rotation criteria.

Today there are many nonquartic criteria of interest for both orthogonal and oblique rotation. A breakthrough came when Browne and Cudeck (see Section 8.3 below) proposed a very simple approach to optimizing arbitrary criteria using pairwise rotation and a line search algorithm. This can be used for either orthogonal or oblique rotation. The only criterion specific code required is a formula to define the criterion.

Along the same lines, Jennrich (2001, 2002) proposed orthogonal and oblique gradient projection (GP) algorithms for optimizing arbitrary criteria. These methods use gradients to optimize the criteria directly without requiring pairwise rotations. They require a formula for the criterion and its gradient. When used with numerical gradients, they require only a formula for the criterion. With analytic gradients they can be considerably faster than Browne and Cudeck's pairwise line search method.
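As a concrete illustration of the gradient projection idea, here is a minimal sketch (not Jennrich's published algorithm or code) of an orthogonal GP iteration with a numerical gradient, using the varimax criterion as the example; the function names, step-halving rule, and stopping test are assumptions.

```python
import numpy as np

def varimax_criterion(L):
    """Varimax simplicity criterion (to be maximized) for a loading matrix L."""
    L2 = L ** 2
    return float(np.sum(L2.var(axis=0)))

def gp_rotate(A, criterion=varimax_criterion, alpha=1.0, tol=1e-8, max_iter=500):
    """Find an orthogonal T maximizing criterion(A @ T) by gradient projection:
    step along the (numerical) gradient, then project back onto the orthogonal
    matrices via the singular value decomposition."""
    k = A.shape[1]
    T = np.eye(k)
    f_old = criterion(A @ T)
    for _ in range(max_iter):
        # forward-difference gradient of f(T) = criterion(A @ T)
        G = np.zeros((k, k))
        eps = 1e-6
        for i in range(k):
            for j in range(k):
                Tp = T.copy()
                Tp[i, j] += eps
                G[i, j] = (criterion(A @ Tp) - f_old) / eps
        improved = False
        step = alpha
        while step > tol:
            U, _, Vt = np.linalg.svd(T + step * G)  # projection onto orthogonal matrices
            T_new = U @ Vt
            f_new = criterion(A @ T_new)
            if f_new > f_old + 1e-12:               # accept only improving steps
                T, f_old, improved = T_new, f_new, True
                break
            step /= 2.0                             # otherwise halve the step
        if not improved:
            break                                   # no improving step: converged
    return A @ T, T

# usage: rotated_loadings, T = gp_rotate(initial_loading_matrix)
```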

What follows provides details for the overview just given.


URL:

https://www.sciencedirect.com/science/article/pii/B9780444520449500069

Statistical Methods for Physical Science

William Q. Meeker , Luis A. Escobar , in Methods in Experimental Physics, 1994

8.1.3 Basic Ideas of Modeling and Inference with the Likelihood Function

The practice of statistical modeling is an iterative process of fitting successive models in search of a model that provides an adequate description without being unnecessarily complicated. Application of ML methods generally starts with a set of data and a tentative statistical model for the data. The tentative model is often suggested by the initial graphical analysis ( Chapter 7) or previous experience with similar data or other "expert knowledge."

We can consider the likelihood function to be the probability of the observed data, written as a function of the model's parameters. For a set of n independent observations the likelihood function can be written as the following joint probability:

(8.2) $L(\theta) = L(\theta;\mathrm{DATA}) = \prod_{i=1}^{n} L_i(\theta;\mathrm{DATA}_i)$

where $L_i(\theta) = L_i(\theta;\mathrm{DATA}_i)$, the interval probability for the ith case, is computed as shown in Section 8.2. The dependency of the likelihood on the data will be understood and is usually suppressed in our notation. For a given set of data, values of θ for which L(θ) is relatively large are more plausible than values of θ for which the probability of the data is relatively small. There may or may not be a unique value of θ that maximizes L(θ). Regions in the space of θ with large L(θ) can be used to define confidence regions for θ. We can also use ML estimation to make inferences on functions of θ. In the rest of this chapter, we will show how to make these concepts operational, and we provide examples for illustration.
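To make Eq. (8.2) concrete, the following sketch treats each observation as known only to lie in an interval, so that each L_i is an interval probability, and maximizes the resulting log likelihood; the normal model, the invented data, and the function names are assumptions chosen for illustration, not the book's example.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def log_likelihood(theta, intervals):
    """Sum of log interval probabilities; theta = (mu, log_sigma)."""
    mu, sigma = theta[0], np.exp(theta[1])
    lo, hi = intervals[:, 0], intervals[:, 1]
    p_i = norm.cdf(hi, mu, sigma) - norm.cdf(lo, mu, sigma)  # interval probability L_i
    return np.sum(np.log(np.maximum(p_i, 1e-300)))           # guard against underflow

# interval-censored observations: each value is known only to lie in [lo, hi]
data = np.array([(1.0, 2.0), (2.5, 3.5), (0.5, 1.5), (3.0, 4.0), (1.5, 2.5)])
fit = minimize(lambda th: -log_likelihood(th, data), x0=np.array([2.0, 0.0]))
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print(mu_hat, sigma_hat)
```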


URL:

https://www.sciencedirect.com/science/article/pii/S0076695X08602586

SCINTILLATION SPECTRA ANALYSIS

R. VAN LIESHOUT , ... R.K. GIRGIS , in Alpha-, Beta- and Gamma-Ray Spectroscopy, 1968

§ 5 Analysis of a spectrum by the peeling method

All accurate methods of spectral analysis should start with a preliminary run to determine appropriate conditions, like geometry, use of absorbers, etc., which may differ for different energy ranges of the spectrum. This will often suffice to make a selection of the most useful standard nuclides, and decide about the intensity of the source under study to be used with the calibration standards.

A programme that has been found to give rather accurate results in most cases consists of a measurement of the background and of some of the standard nuclides; this is followed by a series of measurements of the sample under study, possibly interspersed by those of standards and finally a complete set of standard nuclides is measured followed by a second background determination.

Graphical analysis of the desired spectrum starts by cleaning it of all undesired effects: background, bremsstrahlung and annihilation in flight, with possibly the spurious coincident backscattering peak at 680 keV. The first offers no difficulty; the others can only be subtracted if additional knowledge about the β-ray emission exists. If the composition of the β-ray spectrum (negatons, positons, electron capture) is known, the γ-continuum can be constructed and subtracted. In many cases, however, intensity normalisation will still remain necessary and this can often only be accomplished in a satisfactory and consistent manner after a partial preliminary analysis of the discrete γ-rays has been performed. In this respect the determination of the bremsstrahlung and annihilation in flight contributions becomes part of the method of (graphical) analysis.

Next would come the elimination of summing effects. Random coincidences offer no difficulty in principle, if the values of the time constants involved are known. Summing effects due to cascading γ-rays are only calculable if enough information is known about the level scheme. The major contributions can be recognized through a rather simple coincidence experiment, e.g. a summing study with a well-type crystal.

Once a peak in the spectrum has been recognized as completely or partially due to a true coincidence effect between two identified γ-rays, the pulse height distribution can be determined and subtracted after having been normalised in intensity at the sum photopeak.

From the above it is clear that additional information, not gleaned easily from the direct γ-ray measurement, is very helpful in the analysis and should be used to the fullest extent.

In the peeling method the pulse height position of the highest energy peak in the spectrum is determined, its character as a direct γ-ray or a summing effect is ascertained, and the corresponding pulse height distribution is subtracted from the total spectrum. The shape needed is obtained from the library of standard shapes, if necessary through interpolation. Normalisation of intensity and small adjustments of pulse heights can be easily accomplished by superimposing a sheet of transparent doubly logarithmic paper carrying the constructed shape on one on which the spectrum under study is recorded. This procedure is repeated for every peak resolved from the total pulse height distribution, going to lower energies in every step. At every stage the residuals in the higher energy region and the general appearance of the partially stripped spectrum are inspected. If these are unsatisfactory it is necessary to go back to an earlier stage in the analysis and make a readjustment in the fitting procedure or perhaps reinterpret a peak as due to summing effects or reevaluate background or continuous distributions.
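A stripped-down sketch of this peeling loop is given below. The shape library (responses normalised to unit height at their photopeak channel), the crude peak finder, and the normalisation rule are simplifying assumptions; a real analysis would also include the background subtraction, summing corrections, and interactive readjustments described above.

```python
import numpy as np

def highest_energy_peak(residual, threshold):
    """Channel of the highest-energy local maximum above `threshold`, or None."""
    for ch in range(len(residual) - 2, 0, -1):
        if (residual[ch] > threshold and residual[ch] >= residual[ch - 1]
                and residual[ch] >= residual[ch + 1]):
            return ch
    return None

def peel_spectrum(spectrum, shape_library, threshold=10.0, max_peaks=20):
    """shape_library: dict {photopeak_channel: response normalised to 1 at its peak}."""
    residual = spectrum.astype(float).copy()
    components = []
    for _ in range(max_peaks):
        ch = highest_energy_peak(residual, threshold)
        if ch is None:
            break
        key = min(shape_library, key=lambda c: abs(c - ch))  # nearest library shape
        shape = shape_library[key]
        scale = residual[ch] / shape[ch]     # normalise intensity at the photopeak
        residual -= scale * shape            # strip this component from the spectrum
        residual = np.clip(residual, 0.0, None)
        components.append((key, scale))
    return components, residual
```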

For well-resolved full-energy peaks and disturbing effects that are not too important, the procedure works very satisfactorily. The unavoidable accumulation of statistical errors is then not very objectionable.

A more difficult situation arises if there are two or more full-energy peaks, which are only partially resolved. It pays to subtract the stronger peak first and then go back to the weaker one, even if it is of higher energy. It very often still remains possible to find a certain range of energies and intensities for the components of the multiple peak which fit the total distribution equally well. In such cases the sum of the intensities of the various components can be determined rather precisely, but the determination of their ratio is not very accurate.

In such more complicated cases the graphical method is very flexible, where one can hunt back and forth to obtain the most satisfactory fit. Flaws in the procedure are often recognised at an early stage and can be corrected before they propagate. All this makes it a powerful method in the hands of a trained person. Subjectivity cannot be completely avoided, especially in the treatment of complicated spectra with weak components. Subjective trends can be recognized through a training program, in which sources, mixed of known quantities of standard nuclides, are first analysed using the individual standards themselves and then again with a catalogue of shapes not containing them.

The graphical peeling method has shown its value in countless studies. In many cases the results were substantiated by later studies with experimental devices like magnetic spectrometers, which have higher resolving power, or by more elaborate coincidence work.

The determination of the pulse height position of a full-energy peak in a routine analysis with the peeling method can be accomplished with an accuracy of approximately 1%, except for composite peaks. The reproducibility between different runs is also about 1%. In determining the number of counts associated with each individual γ-ray, there is some leeway in normalizing the top part of a full-energy peak, mostly depending upon how much was subtracted due to higher energy γ-rays, and upon the cumulation of statistical errors. The largest uncertainties often occur in the region of the backscattering peak, which is already superimposed on a Compton continuum. Since the pulse height distribution in this region is rather sensitive to scattering effects, weak peaks can sometimes be missed, or spurious components can be generated by the procedure. Uncertainties due to the inaccuracies in the interpolation between standard shapes, lack of exact reproducibility between different runs, etc., lead to an error that depends strongly upon the specific case. These uncertainties still have to be compounded with those arising from conversion of pulse height to energy and of counting rate to intensities. In favourable cases intensities can be determined with an accuracy of 3% to 5%, but a more realistic average value is closer to 10%.

Once a satisfactory peeling method has been developed and tested, it is no longer necessary to use only monochromatic sources as standards. Nuclides emitting two, or sometimes three, γ-rays of rather different energies can be analyzed and the results checked against data obtained with more accurate methods. Consistency of the results then makes them acceptable as additional standards for shape determinations.


URL:

https://www.sciencedirect.com/science/article/pii/B9780720400830500183

Interrogation of Subdivision Surfaces

Malcolm Sabin , in Handbook of Computer Aided Geometric Design, 2002

13.1 SUBDIVISION SURFACE INTERROGATIONS

Interrogation is the process of determining from a surface various properties which are required for analysis of various aspects of the product which the surface helps to describe, and for creating data to assist in the manufacturing process. The survey paper [13] covers a range of computational approaches.

Display is the first example which comes to mind, being useful both for aesthetic analysis and for marketing. Simple images can be generated by faceting the surface and sending the facets to a Z-buffer display: highly realistic renderings need ray-casting, the calculation of the points where rays from the eye first meet the surface.

Examples of sharper graphical analysis include images colour-modulated to show the variation of curvature, and nests of cross-sections (contours) which skilled loftsmen can use to assess the quality of a surface.

For analysis of the strength and stiffness of the product we need to be able to generate appropriate grids across the surface, varying in local density with the expected spatial frequencies of the displacement under load; being able to find the nearest point on the surface to a given point (estimated by the mesh generation process) is a key tool here.

For manufacture, cross-sections again give the shapes of templates or of internal structural members, and intersections of surfaces are required to determine trimming of skin panels. Offset surfaces are traditionally required for the determination of tool-centre paths for numerically controlled machining, although the recent trend in NC software towards the use of dense triangulations as the master for tool-path determination may reduce the need for that a little.

Such requirements typically take much more CPU than the first setting up of a surface, and so it is important that they be efficiently implemented. In the solid modelling context, it is also important that they be robust in their operation.

The literature is remarkably sparse in describing these issues, with a few tens of papers while there are hundreds dealing with surface descriptions and thousands dealing with the mathematics of splines.

In Geisow's seminal work[7] (more accessibly available in a summary by Pratt and Geisow[11]), various approaches to interrogation were described in some detail, the front-runner at the time being Newton iteration[17], used directly for nilvariate interrogations such as nearest-point, and embedded in a marching procedure for determining univariate interrogations such as cross-sections and intersections. If a piece of surface were known to have only one open piece of intersection across it, the process of approximation by successive halving of the intersection curve was also robust.
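As an illustration of the Newton iteration used for the nearest-point interrogation, here is a minimal sketch for a generic parametric patch S(u, v); the argument list (the surface and its first and second derivatives), the starting guess, and the stopping rule are assumptions, and a production interrogator would also handle patch boundaries and non-convergence.

```python
import numpy as np

def nearest_point(S, Su, Sv, Suu, Suv, Svv, P, uv0, tol=1e-10, max_iter=50):
    """Newton iteration on the stationarity conditions (S - P).Su = (S - P).Sv = 0,
    i.e. on the gradient of 0.5*|S(u,v) - P|^2 with respect to (u, v)."""
    uv = np.asarray(uv0, dtype=float)
    P = np.asarray(P, dtype=float)
    for _ in range(max_iter):
        u, v = uv
        d = S(u, v) - P
        f = np.array([d @ Su(u, v), d @ Sv(u, v)])
        J = np.array([
            [Su(u, v) @ Su(u, v) + d @ Suu(u, v), Su(u, v) @ Sv(u, v) + d @ Suv(u, v)],
            [Sv(u, v) @ Su(u, v) + d @ Suv(u, v), Sv(u, v) @ Sv(u, v) + d @ Svv(u, v)],
        ])
        step = np.linalg.solve(J, -f)       # Newton update in the (u, v) parameters
        uv = uv + step
        if np.linalg.norm(step) < tol:
            break
    return uv, S(*uv)

# S, Su, Sv, ... would be supplied by the surface (e.g. subdivision-limit) evaluator
```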

A little later, Sederberg[14] compared different techniques for finding intersection points of 2D curves. At that time subdivision came out poorly: robust, but relatively slow.

There are reasons to believe that this judgement needs to be reconsidered. The computers on which we now base CAD/CAM systems are much faster than those available then, and, more significantly, the amounts of real memory are orders of magnitude larger. We are now approaching the era of the Gigabyte PC, compared with the tens of Kilobytes which were available when other methods were developed.

We can now afford the memory to subdivide a surface when it is loaded, so that the subsequent interrogations become much cheaper. There are quantitative arguments, based on the relative speed of disc access and processor arithmetic, which can determine the level of detail which should be retained on disk and other arguments can determine the optimum level of detail to be precomputed and retained in memory.


URL:

https://www.sciencedirect.com/science/article/pii/B9780444511041500149

Handbook of the History of Logic

Bert Mosselmans , Ard Van Moer , in Handbook of the History of Logic, 2008

12 Theory of Probability, Statistics and Econometrics

Given the problems that we encountered above, the role and importance of Jevons' system of logic and philosophy of mathematics seems to be limited. It seems to be limited to a pedagogical aspect: Jevons' writings on logic, such as his Elementary Lessons in Logic, were widely used as textbooks and saw numerous reprints, up to decades after his death. This appraisal would not, however, do justice to Jevons' most important achievement: the introduction of statistics and econometrics in the social sciences and the use of empirical data.

Stigler [1982, pp. 354–7] argues that statisticians in the first part of the 19th century were concerned with the collection of data, but not with analysis. The data suggested too many different causes, and the hope to establish a Newtonian social science using statistics faded away. Statistical journals published tables and numbers, but graphical representations and analysis remained absent. Jevons' interest in empirical economic work was probably derived from meteorology, another field in which he was active and for which he collected data and drew diagrams. In 1863 Jevons' use of empirical methods in economics resulted in a first practical survey: A Serious Fall in the Value of Gold. This survey studied the influence of Australian and Californian gold discoveries of 1851 on the value of gold. For this purpose he compared the prices since 1851 with an average price drawn from the previous fluctuation of 1844–50, in order to eliminate fluctuations of price due to varying demand, manias for permanent investment and inflation of credit. The investigation showed that prices did not fall to their old level after a revulsion, which indicates a permanent depreciation of gold after the gold discoveries. Prices rose between 1845–50 and 1860–62 by about ten per cent, which corresponds to a depreciation of gold of approximately 9 per cent [Jevons, 1884, pp. 30–59]. Stigler [1982, pp. 357–61] states that Jevons' methodology is remarkable and novel for his time. The survey computes, for 39 major and 79 minor commodities, the ratio of the average 1860–2 price to the average 1845–50 price. A diagram with a logarithmic scale reveals that 33 of the 39 major commodities and 51 of the 79 minor commodities encountered a rise in price. The 9 per cent gold depreciation is calculated using a geometric mean of the price changes. The use of the geometric mean prevents large values from receiving disproportionate weights. In 1863 the choice of the geometric mean relied on intuition, and in later publications on inadequate explanations. One of these explanations is statistical, saying that multiplicative disturbances will be balanced off against each other using the geometric mean. There is, however, no empirical verification of this 'multiplicative disturbances' hypothesis. Stigler [1982, pp. 362–4] regards the absence of a probabilistic analysis and the measurement of the remaining uncertainty in the averages as an anomaly in Jevons' work. But it should nevertheless be seen as a milestone in the history of empirical economics, because his conceptual approach opened the way for a quantification of uncertainty and for the development of statistics for the social sciences.
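The index-number arithmetic described here amounts to taking a geometric mean of price relatives; the following sketch uses invented ratios, not Jevons' data, purely to illustrate the calculation.

```python
import numpy as np

# Invented price relatives (1860-62 average price / 1845-50 average price);
# these are not Jevons' figures, only an illustration of the arithmetic.
price_relatives = np.array([1.12, 0.95, 1.30, 1.08, 1.22, 0.98, 1.15])
index = np.exp(np.mean(np.log(price_relatives)))   # geometric mean of the relatives
depreciation_of_gold = 1.0 - 1.0 / index           # implied fall in the value of gold
print(index, depreciation_of_gold)   # e.g. a 10% rise in prices implies ~9% depreciation
```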

Aldrich [1987, pp. 233–8] denies that Jevons had no interest in probability. Quite the contrary: his Principles of Science contains an elaborate discussion of probability. Jevons did not use the laws of probability to describe the behaviour of empirical entities, but rather as rules for the regulation of beliefs. Probability enters when complete knowledge is absent, and it is therefore a measure of ignorance. Aldrich [1987, pp. 238–51] argues that Jevons used probability in two main patterns of argument: in the determination of whether events result from certain causes or are rather coincidences, and in the method of the least squares. The first approach entails the application of the 'inverse method' in induction: if many observations suggest regularity, then it becomes highly improbable that these result from mere coincidence. An application of this principle can be found in A Serious Fall, where Jevons concludes that a large majority of commodities taken into consideration show a rise of price, and therefore a rise in exchange value relative to gold. The 'inverse inductive method' leads to the conclusion that a depreciation of gold is much more probable than mere coincidences leading to the rise of prices. The second approach, the method of the least squares, appears when Jevons tries to assign weights to commodities (giving more weight to commodities that are less vulnerable to price fluctuations), and when he tries to fit empirical laws starting from an a priori reasoning about the form of the equation. These methods show at least some concern for probability and the theory of errors. But Jevons worked at the limits of his mathematical understanding, and many ideas that he foreshadowed were not developed until decades after his death. A Serious Fall is not so much remembered for its limited use of probability theory, but rather for its construction of index numbers. In his Principles of Science Jevons refers several times to Adolphe Quetelet. Elsewhere I elaborate on Quetelet's influence on Jevons' writings [Mosselmans, 2005].


URL:

https://www.sciencedirect.com/science/article/pii/S1874585708800155

Chaos

Joshua Socolar , in Encyclopedia of Physical Science and Technology (Third Edition), 2003

III.A.1 Period Doubling

For values of a slightly bigger than 3, empirical observations of the time sequences for this nonlinear dynamical system, generated by using a hand calculator, a digital computer, or our "pencil computer," reveal that the long-time behavior approaches a periodic cycle of period 2, which alternates between two different values of x. Because of the large nonlinearity in the difference equation, this periodic behavior could not be deduced from any analytical arguments based on exact solutions or from perturbation theory. However, as typically occurs in the field of nonlinear dynamics, the empirical observations provide us with clues to new analytical procedures for describing and understanding the dynamics. Once again, the graphical analysis provides an easy way of understanding the origin of the period-2 cycle.

Consider a new map,

(7) $x_{n+2} = F^{(2)}(x_n) = F\bigl(F(x_n)\bigr) = a^2\,(x_n - x_n^2) - a^3\,(x_n^2 - 2x_n^3 + x_n^4)$

constructed by composing the logistic map with itself. The graph of the corresponding return map, which gives the values of x_n every other iteration of the logistic map, is displayed in Fig. 5. If we use the same methods of analysis as we applied to Eq. (6), we find that there can be at most four fixed points that correspond to the intersection of the graph of the quartic return map with the 45° line. Because the fixed points of Eq. (7) are values of x that return every other iteration, these points must be members of the period-2 cycles of the original logistic map. However, since the period-1 fixed points of the logistic map at x = 0 and x* are automatically period-2 points, two of the fixed points of Eq. (7) must be x = 0, x*. When 1 < a < 3, these are the only two fixed points of Eq. (7), as shown in Fig. 5 for a = 2.9. However, when a is increased above 3, two new fixed points of Eq. (7) appear, as shown in Fig. 5 for a = 3.2, on either side of the fixed point at x = x*, which has just become unstable.

FIGURE 5. The return maps are shown for the second iterate of the logistic map, F^(2), defined by Eq. (7). The fixed points at the intersection of the 45° line and the map correspond to values of x that repeat every two periods. For a = 2.9, the two intersections are just the period-1 fixed points at 0 and x*, which repeat every period and therefore every other period, as well. However, when a is increased to 3.2, the peaks and valleys of the return map become more pronounced and pass through the 45° line, and two new fixed points appear. Both of the old fixed points are now unstable because the absolute value of the slope of the return map is larger than 1, but the new points are stable, and they correspond to the two elements of the period-2 cycle displayed in Fig. 4. Moreover, because the portion of the return map contained in the dashed box resembles an inverted image of the original logistic map, one might expect that the same bifurcation process will be repeated for each of these period-2 points as a is increased further.

Therefore, when the stable period-1 point at x* becomes unstable, it gives birth to a pair of fixed points, x^(1) and x^(2), of Eq. (7), which form the elements of the period-2 cycle found empirically for the logistic map. This process is called a pitchfork bifurcation. For values of a just above 3, these new fixed points are stable and the long-time dynamics of the second iterate of the logistic map, F^(2), is attracted to one or the other of these fixed points. However, as a increases, the new fixed points move away from x*, the graphs of the return maps for Eq. (7) get steeper and steeper, and when |dF^(2)/dx| > 1 at x^(1) and x^(2) the period-2 cycle also becomes unstable. (A simple application of the chain rule of differential calculus shows that both periodic points destabilize at the same value of a, since F(x^(1)) = x^(2), F(x^(2)) = x^(1), and (dF^(2)/dx)(x^(1)) = (dF/dx)(x^(2)) (dF/dx)(x^(1)) = (dF^(2)/dx)(x^(2)).)

Once again, empirical observations of the long-time behavior of the iterates of the map reveal that when the period-2 cycle becomes unstable it gives birth to a stable period-4 cycle. Then, as a increases, the period-4 cycle becomes unstable and undergoes a pitchfork bifurcation to a period-8 cycle, then a period-16 cycle, then a period-32 cycle, and so on. Since the successive period-doubling bifurcations require smaller and smaller changes in the control parameter, this bifurcation sequence rapidly accumulates to a cycle of infinite period at a = 3.57….

This sequence of pitchfork bifurcations is clearly displayed in the bifurcation diagram shown in Fig. 6. This graph is generated by iterating the map for several hundred time steps for successive values of a. For each value of a, we plot only the last hundred values of x_n to display the long-time behavior. For a < 3, all of these points land close to the fixed point at x*; for a > 3, these points alternate between the two period-2 points, then between the four period-4 points, and so on.
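A bifurcation diagram of this kind can be generated with a few lines of code following the recipe just described; the parameter range, transient length, and plotting details below are choices of this sketch rather than those used for Fig. 6.

```python
import numpy as np
import matplotlib.pyplot as plt

a_values = np.linspace(2.8, 4.0, 1200)      # one trajectory per value of a
x = 0.5 * np.ones_like(a_values)
for _ in range(300):                        # discard the transient
    x = a_values * x * (1.0 - x)

kept_a, kept_x = [], []
for _ in range(100):                        # keep only the long-time behaviour
    x = a_values * x * (1.0 - x)
    kept_a.append(a_values)
    kept_x.append(x.copy())

plt.plot(np.concatenate(kept_a), np.concatenate(kept_x), ',k')
plt.xlabel('a')
plt.ylabel('x_n')
plt.show()
```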

FIGURE 6. A bifurcation diagram illustrates the variety of long-time behavior exhibited by the logistic map as the control parameter a is increased from 3.5 to 4.0. The sequences of period-doubling bifurcations from period-4 to period-8 to period-16 are clearly visible in addition to ranges of a in which the orbits appear to wander over continuous intervals and ranges of a in which periodic orbits, including odd periods, appear to emerge from the chaos.

The origin of each of these new periodic cycles can be qualitatively understood by applying the same analysis that we used to explain the birth of the period-2 cycle from period 1. For the period-4 cycle, we consider the second iterate of the period-2 map:

(8) $x_{n+4} = F^{(4)}(x_n) = F^{(2)}\bigl(F^{(2)}(x_n)\bigr) = F\Bigl\{F\bigl[F\bigl(F(x_n)\bigr)\bigr]\Bigr\}$

In this case, the return map is described by a polynomial of degree 16 that can have as many as 16 fixed points that correspond to intersections of the 45° line with the graph of the return map. Two of these period-4 points correspond to the period-1 fixed points at 0 and x*, and for a > 3, two correspond to the period-2 points at x^(1) and x^(2). The remaining 12 period-4 points can form three different period-4 cycles that appear for different values of a. Figure 7 shows a graph of F^(4)(x_n) for a = 3.2, where the period-2 cycle is still stable, and for a = 3.5, where the unstable period-2 cycle has bifurcated into a period-4 cycle. (The other two period-4 cycles are only briefly stable at larger values of a.)

FIGURE 7. The appearance of the period-4 cycle as a is increased from 3.2 to 3.5 is illustrated by these graphs of the return maps for the fourth iterate of the logistic map, F^(4). For a = 3.2, there are only four period-4 fixed points that correspond to the two unstable period-1 points and the two stable period-2 points. However, when a is increased to 3.5, the same process that led to the birth of the period-2 fixed points is repeated again in miniature. Moreover, the similarity of the portion of the map near x_n = 0.5 to the original map indicates how this same bifurcation process occurs again as a is increased.

We could repeat the same arguments to describe the origin of period 8; however, now the graph of the return map of the corresponding polynomial of degree 256 would begin to tax the abilities of our graphics display terminal as well as our eyes. Fortunately, the "slaving" of the stability properties of each periodic point via the chain-rule argument (described previously for the period-2 cycle) means that we only have to focus on the behavior of the successive iterates of the map in the vicinity of the periodic point closest to x = 0.5. In fact, a close examination of Figs. 4, 5, and 7 reveals that the bifurcation process for each F^(N) is simply a miniature replica of the original period-doubling bifurcation from the period-1 cycle to the period-2 cycle. In each case, the return map is locally described by a parabolic curve (although it is not exactly a parabola beyond the first iteration, and the curve is flipped over for every other F^(N)).

Because each successive period-doubling bifurcation is described by the fixed points of a return map x_{n+N} = F^(N)(x_n) with ever greater oscillations on the unit interval, the amount the parameter a must increase before the next bifurcation decreases rapidly, as shown in the bifurcation diagram in Fig. 6. The differences in the control parameter between succeeding bifurcations, a_{n+1} − a_n, decrease at a geometric rate that is found to rapidly converge to a value of:

(9) $\delta = \frac{a_n - a_{n-1}}{a_{n+1} - a_n} = 4.6692016\ldots$
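Equation (9) can be checked numerically from the first few period-doubling thresholds; the sketch below uses commonly quoted approximate values for these thresholds (a_1 = 3 and a_2 = 1 + √6 are exact), which are assumptions not given in the text.

```python
import numpy as np

# Thresholds of the first period doublings of the logistic map: a_1 = 3 and
# a_2 = 1 + sqrt(6) are exact; a_3, a_4, a_5 are approximate values assumed here.
a = np.array([3.0, 1.0 + np.sqrt(6.0), 3.544090, 3.564407, 3.568759])
ratios = (a[1:-1] - a[:-2]) / (a[2:] - a[1:-1])   # (a_n - a_{n-1}) / (a_{n+1} - a_n)
print(ratios)                                     # ~4.75, 4.66, 4.67: approaching delta
```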

In addition, the maximum separation of the stable daughter cycles of each pitchfork bifurcation also decreases rapidly, as shown in Fig. 6, by a geometric factor that rapidly converges to:

(10) α = 2.502907875


URL:

https://www.sciencedirect.com/science/article/pii/B0122274105000946