Dust flux, Vostok ice core

Two dimensional phase space reconstruction of dust flux from the Vostok core over the period 186-4 ka using the time derivative method. Dust flux on the x-axis, rate of change is on the y-axis. From Gipp (2001).

Wednesday, September 29, 2010

Alien Justice

Ok, I admit I don't quite know what to make of this.

Supposedly the aliens are opposed to nuclear weapons.

We'll see if the links come down.

It reminds me of a story reported by Pravda when I was in the Okanagan in 1989: aliens had landed in Russia and greeted the locals.

Saturday, September 25, 2010

Decline of variability when approaching bifurcation point: an alternative view of charting

One of the significant problems facing climate scientists is how to predict when the system is about to undergo a major change in organization. This point is important because a precise prediction of what will happen in response to continued alteration of the atmosphere is unlikely. What we may be able to do is look for signals of an impending global reorganization, in the hope that we will recognize them before the change suddenly occurs.

The basis of this discussion is that the climate system acts to resist external forcings, but if the forcing is strong enough, the system then evolves very rapidly towards some new equilibrium. We may or may not be able to predict what the new equilibrium will be.

It is generally viewed that a change in climate equilibrium would be bad for us as our entire system of global agriculture is geared towards our present equilibrium, which has lasted for about ten thousand years (which, come to think of it, is about as long as we have been practicing agriculture). We haven't put much effort into planning for agriculture in a new equilibrium state.

Once again, understanding the past is the key to predicting the future.

A number of papers have been presented recently in which the authors looked at climate signals prior to major shifts in the paleoclimate record. They have noticed that prior to a major change in climatic regime, the short-term variability declines dramatically.



Carbonate content of marine sediments during a "climate tipping" event from greenhouse conditions to icehouse conditions ca. 34 million years ago. From Thompson and Sieber (in press). The lower figure tracks the rise of an autoregressive parameter--essentially an inverse measurement of variability.

The reason for this decline in variability is unclear; however, it appears both in models and in observed time series. Given the detailed observations made of present-day climate, it is theoretically possible to anticipate when a sudden climate change is about to occur.
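This early-warning signal can be computed directly. A minimal sketch (with synthetic data, since the Thompson and Sieber series isn't reproduced here): estimate the lag-1 autocorrelation--essentially the autoregressive parameter in the figure caption above--in a sliding window, and watch it climb toward 1 as the system approaches the bifurcation.

```python
import numpy as np

def rolling_ar1(series, window):
    """Lag-1 autocorrelation in a sliding window.

    Values rising toward 1 indicate "critical slowing down" -- the
    decline in short-term variability discussed above.
    """
    out = []
    for i in range(len(series) - window + 1):
        w = series[i:i + window]
        w = w - w.mean()
        # AR(1) coefficient estimated as the lag-1 autocorrelation
        out.append(np.dot(w[:-1], w[1:]) / np.dot(w, w))
    return np.array(out)

# Synthetic demo: noise whose persistence slowly increases, mimicking a
# system approaching a tipping point.
rng = np.random.default_rng(0)
n = 2000
phi = np.linspace(0.1, 0.95, n)   # slowly rising AR(1) parameter
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi[t] * x[t - 1] + rng.standard_normal()

indicator = rolling_ar1(x, window=400)
print(indicator[0], indicator[-1])  # low early, near 1 late
```

The rising indicator is the same signature tracked in the lower panel of the figure above.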

There are counter-arguments, however. Climate measurements are normally easier to make than ecological measurements.

What does this mean for the stock market?

Some well-known charting patterns show the same behaviour; most notably the triangle pattern.


This does seem to be at odds with some conclusions I've drawn earlier. I may have to reconsider.

Consider it evidence that dynamic systems are hard.






Reference

Thompson, J. M. T. and Sieber, J., in press. Predicting climate tipping as a noisy bifurcation: A review. International Journal of Bifurcation and Chaos.

Thursday, September 23, 2010

Reconstruction of the gold and silver-price state space: evidence for manipulation?

Adrian Douglas, in an article here, has argued that the nature of the variation in price between gold and silver (as compared with that between gold and copper) supports the contention that the prices of gold and silver have been manipulated. While we here at the World Complex do not doubt the manipulation of the gold and silver prices, we take exception to Mr. Douglas' latest argument.

The basis of Mr. Douglas' argument is two figures: a scatterplot of gold price vs. copper price, and a scatterplot of gold price vs. silver price.

Hopefully he won't mind if I reproduce them here so we can discuss their significance.





Copper-price vs. gold price scatterplot.


Why this is a two-dimensional phase space for the gold and copper price. 

As we have seen previously, phase spaces can be reconstructed from scatterplots of independent variables.  The orthogonality of our axes may be called into question when the two time series are strongly correlated, but as long as the correlation is not perfect, we gain additional information about the system from such a plot.

The only real difference between, say, our reconstruction of the DGC share price state space and the above state space from Mr. Douglas' article is that Mr. Douglas has not joined the points in chronological order. But there is certainly enough information here to work with.


Silver-price vs. gold price scatterplot.

Here Mr. Douglas has constructed a two-dimensional state space for the gold and silver price. Mr. Douglas has correctly pointed out that there are two groups of points, each of which indicates a linear relationship between the two prices.

Mr. Douglas has performed the linear regression for us, and the result is not necessary for our critique.

The group at the left (with the black regression line) represents the gold- and silver-price states over the period from June 2003 until September 2008, whereas the right grouping represents the same over the period from September 2008 until September 2010.

Mr. Douglas asserts that the differences between these two groupings--combined with the lack of such differences in the gold-copper price state space--are evidence of manipulation. The World Complex will take a contrary position: this particular argument does not support the presence of manipulation (again, we agree with Mr. Douglas that there is manipulation; we simply hold that its existence is not supported by the above argument).

The principal evidence against Mr. Douglas' argument is that natural dynamic systems spontaneously develop the same structure within their state spaces. Below is a frame from a poster presented at the spring AGU conference in 2009. (abstract is the last on the list here).















The above graphs relate to acoustic characteristics of sedimentary layers in a relatively young marine basin--Emerald Basin, which lies on the Nova Scotia continental shelf. There are many difficulties in obtaining the above data, which explains its sparseness. There are also some unresolvable issues that cause unquantifiable errors, so the data are not nearly as precise as the data sets of Mr. Douglas.

As happens with Mr. Douglas' graph of silver price vs. gold price, some of the above units can be divided into two distinct groups, each of which would have a different linear relationship. This happens because of changes in oceanographic conditions, which effect a "reset" of the relationship between the two variables. Critically, however, the slopes of the regression lines for the different groups are nearly the same (ideally identical in accurate data).

Note that the slopes of the regression lines in Mr. Douglas' gold-silver price chart are quite similar. That slope is a defining characteristic of the relationship between the gold and silver price over the period of the plot (we would find significant changes in that slope using historical data, because the slope is 100 times the reciprocal of the gold:silver price ratio).
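The similarity of slopes is easy to check numerically. A sketch with hypothetical stand-in numbers (an assumed gold:silver ratio of 55 and invented noise; Mr. Douglas' actual data would replace these): two regimes with the same slope but a different intercept, like the pre- and post-September-2008 clusters.

```python
import numpy as np

def slope(x, y):
    """Least-squares slope of y against x."""
    return np.polyfit(np.asarray(x, float), np.asarray(y, float), 1)[0]

rng = np.random.default_rng(2)
# Regime 1: hypothetical pre-2008 cluster, silver = gold / 55 + noise
gold_a = np.linspace(350, 1000, 200)
silver_a = gold_a / 55 + rng.normal(0, 0.3, 200)
# Regime 2: same slope, but the intercept has been "reset"
gold_b = np.linspace(700, 1300, 200)
silver_b = gold_b / 55 + 4 + rng.normal(0, 0.3, 200)

s1, s2 = slope(gold_a, silver_a), slope(gold_b, silver_b)
print(s1, s2)  # nearly equal: the defining relationship survives the reset
```

If the two fitted slopes agree, the "reset" moved the relationship without changing its defining slope--which is the natural-dynamic-system behaviour described above.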

How do we relate our argument about natural dynamic systems to this particular economic dynamic system? Instead of changes in oceanographic conditions, we might look for a change in the economic system at the point where the dynamics change. That change apparently occurred in September 2008, at which time there most certainly were exciting things happening in the economic sphere.

The financial crisis that came to fruition in September 2008 may have caused the market to suddenly re-evaluate its perception of the correct relationship between the gold price and the silver price.

Is it possible to test this hypothesis?

Let's reconsider the gold-copper state space.



In this plot I have sketched in a few regression lines for groups of points on the copper-gold plot Mr. Douglas presented in his original article (actually I'm really skeptical about the little one on the far right).

I am not privy to Mr. Douglas' original time series, so I have no way of knowing whether the different groups I have indicated actually represent different intervals of time. If I were to guess, I would guess that the third line from the left starts at about September 2008, and resulted from the same economic head-bonk that reset the gold-silver price relationship. The others in the copper-gold plot above may result from smaller economic hits, which primarily slowed the economy, affecting the copper price.

Although the World Complex disagrees with Mr. Douglas' argument in this particular case, we do agree that the prices of precious metals are manipulated, and expect our readers to act appropriately.

(Damn--I was trying to find that huge "Buy gold" message Sprott had last year.)

Tuesday, September 21, 2010

A chaotic toy

Here is a fun toy illustrating chaos--the three body problem.

Lots of fun. Here are some screenshots.




Try two bodies and see for yourself the sensitivity to initial conditions.


Both planets start off with almost the same initial conditions, and as time progresses they drift farther apart. The rate of this drift (which would be described by the Lyapunov exponent) is tied to the initial mass you give Sun 1 (on the left). When the value is very low, the Lyapunov exponent is similarly low, and it takes a long time for the orbits to diverge.
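You can reproduce this divergence without the applet. A sketch using the logistic map--a much simpler chaotic system standing in for the three-body problem--which estimates the Lyapunov exponent from two nearby starting points:

```python
import numpy as np

# Two trajectories of the chaotic logistic map (r = 4), started a
# hair's breadth apart; their separation grows roughly exponentially
# until it saturates at the size of the attractor.
r = 4.0
x, y = 0.3, 0.3 + 1e-10
sep = []
for _ in range(40):
    x, y = r * x * (1 - x), r * y * (1 - y)
    sep.append(abs(x - y))

# The slope of log(separation) vs. step estimates the Lyapunov exponent
# (the true value for r = 4 is ln 2, about 0.69).
steps = np.arange(1, 31)
lyap = np.polyfit(steps, np.log(sep[:30]), 1)[0]
print(lyap)
```

A positive estimated exponent is the fingerprint of chaos: nearby orbits separate at an exponential rate, just as in the two-suns experiment.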

This is a lot like a game we used to have years ago, except you would set the initial position and velocity vector for a number of celestial bodies around a central Sun. It was great fun having them collide, and vexatious trying to make a stable solar system.

Sunday, September 19, 2010

Bifurcations in the paleoclimate system: an alternate view

In our last installment, we looked at a sequence of probability density plots of the 2D global ice volume state and inferred that the global climate has undergone changes in organization, which we call bifurcations.

A reviewer of a recently submitted paper asked why I use a 2D reconstruction when all of the metastable states appear to lie on a line, suggesting that one dimension might be enough.

Let's see what he means.

This is one frame of the animation in our last installment. (I suppose this should be Gipp, in rev.) The five Lyapunov-stable areas A2 to A6 all appear to lie on a line, approximately defined by y = x. So do we need two dimensions?


The above is a frame from a poster from the AGU fall meeting in 2008. The probability density is calculated from the last 750,000 years of the 2-d reconstructed phase space portrait from the ice volume proxy of Shackleton et al. (1990). The curved line is the 2-d phase space reconstruction over the last 125,000 years. At right we see the last 500,000 years of the proxy record.

By inspection we see that for the phase space portrait to plot in the A2 metastable state, the value of the proxy must be between 3.8 and 4.0. This is a necessary condition but is not sufficient for the state to plot on A2.

Similarly, for the A3 metastable state, the value of the proxy must lie between about 4.1 and 4.3, which is again necessary but not sufficient. To see why, consider the portions of the reconstructed state space near 128,000 years (128 ka), near 68 ka, and near 5 ka (the three blue circles on the reconstructed state space). At all three of these times, the proxy function has values between 4.1 and 4.3, yet none of these points correspond to the LSA--something which would not be obvious purely on the basis of the proxy function.

Such is the hazard of trying to determine the existence of metastable states on the basis of the time series without "unfolding" it into a more appropriate dimension (please note that 2-D is not an adequate projection either, but it does eliminate almost all of this sort of error).
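For those who want to try the unfolding themselves, the time-lag reconstruction is only a few lines of code. A sketch with a sine wave standing in for the proxy record--note how states with nearly the same one-dimensional value separate cleanly in two dimensions:

```python
import numpy as np

def delay_embed(series, lag):
    """Two-dimensional time-lag embedding: points (x(t), x(t + lag))."""
    x = np.asarray(series, float)
    return np.column_stack([x[:-lag], x[lag:]])

# A sine wave (stand-in for a proxy record) unfolds into a closed loop.
t = np.linspace(0, 20 * np.pi, 2000)
pts = delay_embed(np.sin(t), lag=25)

# States whose 1-D proxy value is near zero split into two distinct
# regions of the 2-D space -- invisible in the raw time series.
near_zero = pts[np.abs(pts[:, 0]) < 0.05]
print(near_zero[:, 1].min(), near_zero[:, 1].max())
```

This is exactly the "necessary but not sufficient" problem: points sharing a proxy value can sit in entirely different parts of the reconstructed state space.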

The state space over the interval approximately from 58 ka until about 35 ka plots within the A4 LSA (note blue arrow connecting the appropriate regions on the two plots above). However, at the interval near 11 ka, the proxy function has the appropriate value for the A4 LSA (about 4.6)--nevertheless, the state space does not plot within the LSA, and again provides misleading information if one merely calculated probability density in one dimension.

The reason the LSAs all line up is the topological equivalence between a two-dimensional phase space plotted using the time-lag method (as above) and one plotted using the time-derivative method. There are types of LSAs that would not plot along the y = x line: in the type of plot we have above, we would see two corresponding patches representing the intersection of a toroidal limit cycle with the plane of the plot, one on each side of the y = x line. The physical meaning of such an LSA would be an oscillating glacial state--not impossible, but I can't interpret it from the record.

The LSAs A1 to A6 (in the animation) all represent relatively stable global ice volumes--achieved through a kind of dynamic stability that takes into consideration such factors as the slope of the continental shelves, anchoring points on the shelf or in coastal areas and lithospheric and asthenospheric reaction to ice distribution.

Livina et al. (2010) use a one-dimensional approach to infer the existence of multistability in the climate system over the past 60,000 years, using proxies of Greenland temperatures over that interval. This is exactly the problem we are trying to solve (with a different data set, though)!

The basic approach is to find the probability density of the data from the time series (what we have called one-dimensional here).

A brute force method would be as follows:

Just for fun I'm using a different data set here--this is believed to be a proxy for Himalayan monsoon strength over the past 2+ million years.

To find probability density by a brute force technique, you would draw numerous boxes (yellow) and calculate the length of time the proxy function plots within the box. As the box is slowly migrated from bottom to top, and these different lengths of time are plotted (green wiggle at right--which I have semi-arbitrarily sketched--sorry, I haven't calculated this), you have a probability density plot of the data. This answers the question of whether certain values are more common than others. The more "popular" values may represent metastable states (or Lyapunov-stable areas).
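The migrating-box calculation might be coded like this (a synthetic proxy with two preferred values stands in for the monsoon record):

```python
import numpy as np

def residence_density(series, box_width=0.1, nsteps=200):
    """Slide a box over the range of the proxy and count the fraction
    of the record that plots inside it -- the brute-force probability
    density described above."""
    x = np.asarray(series, float)
    centers = np.linspace(x.min(), x.max(), nsteps)
    counts = np.array([np.sum(np.abs(x - c) <= box_width / 2)
                       for c in centers])
    return centers, counts / len(x)

# Synthetic proxy that lingers near 3.9 and 4.5 -- the two "popular"
# values appear as two peaks (candidate metastable states).
rng = np.random.default_rng(3)
proxy = np.concatenate([rng.normal(3.9, 0.05, 500),
                        rng.normal(4.5, 0.05, 500)])
centers, dens = residence_density(proxy)
```

The two peaks in `dens` are the green wiggle of the figure, calculated rather than sketched.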

Now, to make things even more interesting you might do what I did in creating the animation in a previous post. Instead of calculating the probability density over the entire data set, you calculate it over a succession of "windows" of time (100,000 years duration, say--the choice depends on what you hope to show). Then you would have a probability density graph for each of these different windows (and there is no reason why the windows couldn't overlap).

If some of your windows had four probability peaks, and others had two or even one, then voila! you have demonstrated a bifurcation.
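A sketch of the windowed version (a synthetic record, unimodal early and bimodal late, so the peak count should rise; my peak-counting rule is an ad hoc choice, not anyone's published method):

```python
import numpy as np

def smoothed_peak_count(values, bins=30, floor=0.02):
    """Histogram a window, lightly smooth it, and count interior local
    maxima above a floor -- a rough count of probability peaks."""
    hist, _ = np.histogram(values, bins=bins)
    d = np.convolve(hist / hist.sum(), [0.25, 0.5, 0.25], mode="same")
    peaks = (d[1:-1] > d[:-2]) & (d[1:-1] >= d[2:]) & (d[1:-1] > floor)
    return int(peaks.sum())

rng = np.random.default_rng(4)
early = rng.normal(4.0, 0.05, 1000)              # one metastable state
late = np.where(rng.random(1000) < 0.5,          # two metastable states
                rng.normal(3.8, 0.05, 1000),
                rng.normal(4.4, 0.05, 1000))
record = np.concatenate([early, late])

# Probability peaks counted in overlapping 800-sample windows:
counts = [smoothed_peak_count(record[s:s + 800])
          for s in range(0, len(record) - 800 + 1, 400)]
print(counts)
```

A rise in the peak count from the first window to the last is the bifurcation signature described above.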

This is the essence of the argument of Livina et al. (2010). Now, dear reader, if you have been paying attention you will see the problem.

The technique just described overestimates the time spent actually occupying the stable states. To determine how much for the particular data set of Livina et al. (2010) would require some time with the data--which I'm not sure I can spare now.

Livina and her coauthors do not use the brute force method described above. They estimate the probability density using a Gaussian kernel estimator. Okay: I would do the same thing if I had a record as densely sampled as theirs. (There is a two-dimensional estimator that works on the same lines).

They calculate a potential based on the linear probability density estimate and a noise estimate, and approximate the shape of their (linear, remember) potential estimator by a polynomial of even order. The potential estimate [U(z)] has a well (local minimum) corresponding to peaks in probability.
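Their construction can be sketched as follows (all the particulars here--synthetic bimodal data, Silverman's bandwidth, sigma taken as 1, a 6th-order fit--are my illustrative assumptions, not their actual choices):

```python
import numpy as np

# Synthetic bimodal sample standing in for a densely sampled proxy.
rng = np.random.default_rng(5)
sample = np.concatenate([rng.normal(-1, 0.3, 5000),
                         rng.normal(1, 0.3, 5000)])

# Gaussian kernel estimate of the probability density p(z),
# with Silverman's rule-of-thumb bandwidth.
z = np.linspace(-1.8, 1.8, 401)
h = 1.06 * sample.std() * len(sample) ** (-1 / 5)
u = (z[:, None] - sample[None, :]) / h
p = np.exp(-0.5 * u ** 2).sum(axis=1) / (len(sample) * h * np.sqrt(2 * np.pi))

# Potential U(z) = -(sigma^2 / 2) ln p(z), with sigma^2 taken as 1,
# approximated by an even-order (here 6th) polynomial.
U = -0.5 * np.log(p)
coeffs = np.polyfit(z, U, 6)
U_fit = np.polyval(coeffs, z)
# Two wells (local minima) should appear near z = -1 and z = +1.
```

Each well in the fitted potential corresponds to a probability peak, i.e. a candidate stable state.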

People, people, please. If I've told you once, I've told you a thousand times*--use Chebyshev polynomials! Seriously. You'll thank me later.

The advantage of orthogonality is key. Chebyshev polynomials are orthogonal, whereas standard polynomials are not. (For our purposes, orthogonality means that the integral from -1 to 1 of the product of two Chebyshev polynomials Tn(x) and Tm(x), with m not equal to n, taken with the weight 1/sqrt(1-x^2), is zero. Sorry, blogger is crappy at displaying this.)

Because of the orthogonality of the individual functions, you can use a Fourier-type process to transform your original function into one composed of Chebyshev polynomials (technically this is a Fourier-Chebyshev transform). Such a transform has many similarities to a Fourier transform, such as the limitation on the highest-order Chebyshev polynomial (called the Nyquist Chebyshev number) being defined by your sample interval.
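Both properties are easy to verify with numpy's Chebyshev module. The orthogonality integral carries the weight 1/sqrt(1 - x^2); substituting x = cos(theta), under which T_n(x) becomes cos(n theta), absorbs it:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Orthogonality: the integral over [-1, 1] of T3(x) T5(x) / sqrt(1 - x^2)
# becomes the integral of cos(3t) cos(5t) over [0, pi], which vanishes.
theta = np.linspace(0.0, np.pi, 20001)
y = np.cos(3 * theta) * np.cos(5 * theta)
integral = np.sum((y[1:] + y[:-1]) / 2) * (theta[1] - theta[0])
print(integral)   # ~ 0

# The Fourier-Chebyshev transform of a sampled function amounts to
# re-expressing it in the Chebyshev basis:
x = np.cos(theta)
cheb = C.Chebyshev.fit(x, np.exp(x), deg=10)
```

The coefficients of `cheb` play the role that Fourier amplitudes play in a Fourier transform, indexed by Chebyshev number instead of frequency.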

Codes for FC transform should be in Numerical Recipes. They were in the 1989 edition.

Data sets are noisy? Yup! (Paleoclimate sets are notoriously so). You can design filters in the Chebyshev number domain and eliminate some of that unpleasant high-Chebyshev-number noise (like high-frequency noise, only better).

You can also design filters in the Chebyshev number domain, which can be applied to your function without the need for a Fourier-Chebyshev transform by reverse transforming the filter into a series of convolution coefficients and convolving with the original data series. Just as in standard filter design, your filter has to be designed without sharp corners so that the convolution series remains small.
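A sketch of the transform-truncate-invert route (a hard cutoff for brevity, though as noted above a tapered filter behaves better; sampling at Chebyshev-Gauss points makes the forward transform an exact cosine sum):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Sample a noisy "proxy" at Chebyshev-Gauss points, where the discrete
# orthogonality of the T_n makes the forward transform a cosine sum.
rng = np.random.default_rng(6)
N = 400
theta = np.pi * (np.arange(N) + 0.5) / N
x = np.cos(theta)
signal = np.cos(3 * theta)                 # = T_3(x), the underlying signal
noisy = signal + 0.2 * rng.standard_normal(N)

# Forward Fourier-Chebyshev transform (a discrete cosine transform):
n = np.arange(N)[:, None]
a = (2.0 / N) * (np.cos(n * theta[None, :]) @ noisy)
a[0] /= 2.0

# Low-pass filter in the Chebyshev number domain: zero everything above
# Chebyshev number 9, then evaluate the filtered series.
a[10:] = 0.0
smoothed = C.chebval(x, a)
print(np.abs(smoothed - signal).max())     # well below the 0.2 noise level
```

Discarding the high Chebyshev numbers removes most of the noise while leaving the low-order signal untouched--the "better than high-frequency filtering" claim in miniature.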

You can define the significance of the different probability peaks in terms of the relative significance of (in the two state case) the fourth order Chebyshev polynomial with respect to the first or second, or of the sixth order polynomial for a three state case.

Livina and her coauthors use a disingenuous method of defining the number of wells in their potential function. Since their estimate of the potential is a polynomial of even order, their contention is that the number of wells is better calculated by counting the number of inflection points (where the second derivative is zero--that is, where the direction of curvature reverses) rather than the number of local minima.

It appears that they may be better off counting both. Here's why.



Here we see a potential function U(z) which is a polynomial of even order. The "+" represent regions of the curve where curvature is positive and the "-" represent segments of the curve with negative curvature.

There are four inflection points in the graph at left. Using equation 7 of Livina et al. (2010), we would infer the presence of three stable states.

By inspection we see only one real potential well, as well as one degenerate solution where both the first and second derivatives are zero. This type of point (just to the left of the origin) is of tremendous significance in dynamic systems analysis, but goes unnoticed by simply counting the inflections.

Compared to the Gaussian kernel estimator, looking at all critical points is a simple matter and is better not overlooked.
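To illustrate, here is a hypothetical potential of my own construction (not Livina et al.'s estimator): take U'(z) = (z + 0.5)^2 (z - 1)(z^2 + 1), so that U is a polynomial of even (6th) order with one genuine well at z = 1 and a degenerate point at z = -0.5 where U' and U'' vanish together. Examining all the critical points catches both features; counting inflections alone does not:

```python
import numpy as np
from numpy.polynomial import Polynomial

# U'(z) = (z + 0.5)^2 (z - 1)(z^2 + 1), expanded into coefficients
# (ascending order); U is its even-order antiderivative.
Uprime = Polynomial([-0.25, -0.75, -0.25, 0.25, 0.0, 1.0])
Udd = Uprime.deriv()          # U''(z)

def real_roots(p, tol=1e-4):
    r = p.roots()
    return np.sort(r[np.abs(r.imag) < tol].real)

crit = real_roots(Uprime)                              # all critical points
minima = [z for z in crit if Udd(z) > 1e-2]            # true wells: U'=0, U''>0
degenerate = [z for z in crit if abs(Udd(z)) < 1e-2]   # U'=0 and U''=0
inflections = real_roots(Udd)                          # curvature reversals

print(minima, degenerate, inflections)
```

The tally shows one true well (at z = 1) and a degenerate point (at z = -0.5) that inflection-counting lumps in with ordinary curvature reversals.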

The Livina et al. (2010) methodology would be much improved by working in more than one dimension. I have worked in two, but for the data sets I am working on, going to three would be an extremely intense computational exercise. But for data sets with annual sampling over 60 ky or more, going to a three-dimensional probability estimator of the density of points in a three-dimensional phase space would be a huge step forward.

* I know, I know--I've never told you. But I'm telling you now.

References:

Gipp, M. R., 2008. Semiquantitave Characterization of Dynamic Response of Long Records Paleoclimate Records. [damn typos] American Geophysical Union, Fall Meeting 2008, abstract #PP13A-1428.

Gipp, M. R., in rev.  Imaging system change in reconstructed phase space: Evidence for multiple bifurcations in Quaternary climate. Submitted to Paleoceanography (in revision).

Livina, V. N., Kwasniok, F., and Lenton, T. M., 2010. Potential analysis reveals changing number of climate states during the last 60 kyr. Climate of the Past, 6: 77-82.




Update November 14--see a new animation here

African economics

The air in Agbogbloshie market, Accra, is redolent of onion. In fact the market is crammed with more onions than the mind can easily conceive.

Trucks loaded with them fill a large square behind the merchants in their wooden stalls that line the road for almost a kilometre. There are onions in every meal in Ghana. It strikes me as strange, because the onion is nowhere near as famous in West African literature as the yam.


And yet, here are all these onions. All the vendors are together. More on this in a moment.

Not far away is the Agbogbloshie metal market. Once again, hundreds of different vendors selling various metal products (rebar is a common theme) are all packed together in close quarters.


Rebar salesmen. Used rebar is also available.

The famous metal man of Agbogbloshie.

Another example--coffin makers. They are all grouped together (logically) around the hospital. The buildings below are all around the Korle-Bu hospital in Ghana.


Business is booming.

Why are all these vendors together? Didn't they consult with Toronto City Councillors? (In the early phase of Toronto's ill-fated food cart program, politicians decided where the carts should go. The result has been bankruptcy for all the cart-owners.)

What is interesting about the markets in Africa is the absolute absence of government regulation. Not to say that this hands-off policy is the result of a philosophical debate--it's just that the government has no resources for meddling in the local economy.

Sadly, the national economy is a different matter.

For this reason, studying African marketplaces is about as close to the spirit of true capitalism as you can get. The local merchants succeed or fail purely on their ability to please their customers. And the system works as well as it can.

In a future article, we will consider public transit.




The mighty tro-tro!

Thursday, September 16, 2010

Festival of Kundum

It's that time of year again. I'm not in Ghana yet, but should be before the end of the month. Here are some shots from the Kundum festival in Axim (western region of Ghana) in 2008.

Man in ent costume. Maybe from the back, but who can tell?

Regatta about to begin.


video

There is a lot of dancing and drumming, and although the festival lacks the polish of something like Carnivale, it is a good time in its own right.




The chief is carried by the crowd back to his home.





video

The chief returns to his house.

But it isn't just about carrying the chief home. There is a formal order to everything which is difficult to convey, particularly when you are wandering aimlessly through the crowd and are not aware of the symbolism of everything. This is not simply a harvest festival.






Crowd has gathered at the foot of Axim Castle to witness child acrobats.


Woman in "devil riddance" outfit.



Race. That just may be one of the Montreal Canadiens helping the team on the right.

Have fun.

video

Wednesday, September 15, 2010

Probability density in phase space part 2: Detecting bifurcations

Continuing from last time.

We can use a succession of probability density plots (PDPs) to show whether the number of Lyapunov-stable areas (LSAs) changes, or whether the locations of individual LSAs move. If generated in sequence quickly enough, the PDPs can be displayed as a movie.

The significance of this is that a change in the distribution of LSAs may be indicative of a bifurcation in the system. A bifurcation is suggestive of a reorganization of the system, often appearing as a sudden response to a gradual change in one or more system parameters which has crossed some critical threshold.
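One frame of such a PDP can be sketched as a two-dimensional histogram of time-lagged states (synthetic stand-in data here, not the ice-volume proxy):

```python
import numpy as np

rng = np.random.default_rng(7)
# Proxy that dwells near two levels with noisy transitions:
levels = np.repeat([3.9, 4.5, 3.9, 4.5], 500)
proxy = levels + 0.05 * rng.standard_normal(levels.size)

# Embed by the time-lag method, then estimate density on a grid;
# a dense cell is a candidate LSA.
lag = 5
states = np.column_stack([proxy[:-lag], proxy[lag:]])
H, xedges, yedges = np.histogram2d(states[:, 0], states[:, 1],
                                   bins=25, density=True)

# The densest cell sits in one of the two dwell regions, near y = x.
i, j = np.unravel_index(H.argmax(), H.shape)
cx = 0.5 * (xedges[i] + xedges[i + 1])
cy = 0.5 * (yedges[j] + yedges[j + 1])
print(cx, cy)
```

Computing `H` over a succession of overlapping windows, rather than the whole record, gives the sequence of frames; a change in the number of dense patches from frame to frame is the candidate bifurcation.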

Let's look at the bifurcations in global ice volume suggested by a sequence of PDPs spanning approximately the past two million years (original data from Shackleton et al., 1990).


There are a number of book-keeping details in this video. The sliding scale bar at the right shows the interval of time represented by each frame. There is a little black bar representing 30,000 years of missing data (that section of the core was badly undersampled).

The big red interval on the time bar labelled "MPR" represents the time of the "Mid Pleistocene Revolution" (also called the Mid-Pleistocene Transition) which represents a major change in the operation of the climate system.

The probability density of every frame was calculated on the basis of 270,000 years of data, so the blocks that include the undersampled bit actually represent the available data over an interval of 300,000 years. There is a possibility that some bias may have resulted.

As is normal for geological projections, the earliest PDP is presented first. The movie therefore flows forward in time, and we see that, in general, ice volumes have increased over the past two million years (low ice volume is at lower left; higher ice volume is at upper right). The system has become more complex, as there are generally more stable zones in the later part of the record than in the earlier part.

Much of the complexity can only be appreciated by going back and looking at the actual state space superimposed on top of the probability density plot. One way of establishing the complexity of the climate system is by computing the complexity of the simplest epsilon machine that reproduces the patterns observed within the data (see also here--will do an entry on this at some point).

There is another abstract here which expresses the same ideas with different terminology.

Next in the series we will compare this method with alternate approaches taken from the literature.

Tuesday, September 14, 2010

Probability density in phase space part 1 (formerly How life imitates . . .)

Probability density in phase space

Dynamic systems can be interpreted qualitatively by describing their phase spaces in terms of the type, order, and number of attractors (or areas of stability) that are traced out as the system evolves through time. Some systems may be described by a number of disjoint Lyapunov-stable areas (LSA), each separated in phase space by a separatrix (Kauffman, 1993). At any given time, the state of the system occupies only one such LSA, so that their number therefore constitutes the total number of alternative long-term behaviors, or equilibrium states, of the system. 

Since an LSA is likely to be smaller than the total allowable range of states, the system tends to become boxed into an LSA unless it is subjected to external forcing. When the state approaches a separatrix, small perturbations can trigger a change to a nearby state, which can result in chaotic changes in the evolution of the system.


Monday, September 13, 2010

Modeling is hard part 2

Now let's look at what seems to be a straightforward problem with a straightforward answer, and then consider the complexities that make prediction difficult. We will see that once again, the individual elements in the system behave unpredictably.

Some years ago, the woman who later became my wife was practicing her panning technique in the Don River, in Toronto. (She was about to recover gold grains from some of our fine concentrates from a West African sampling program.)
 
I myself am not good at panning. I never had the patience for it. My wife does, which is a blessing as she is married to me.

She got an unpleasant surprise from the gravels at the first bend north of the Viaduct, where the river runs up against the railroad tracks.

The pan was full of mercury.

What happened after alerting the authorities was somewhat amusing but a digression from our current discussion (complex indeed, but not entirely unpredictable). So it has been omitted.

A few years later she carried out a sampling program looking for mercury in the Don River. The samples were collected in locations as advised by another local geologist, who had many years of experience in placer operations worldwide. I also helped with some of the heavy lifting.

Samples were collected in the winter, when the chances were greatest of finding elemental mercury.

Elemental mercury was indeed recovered. Some of the droplets were beautiful. We also noted many interesting textures where mercury had alloyed with other metals. Many droplets were covered with small plates of metal amalgam. Some of these small plates were covered with spheres, as if mercury vapour itself had condensed on the plates.


Electron microscope imagery of a mercury droplet (left). Scale bar is 100 microns in length. Detail of droplet (right) showing spherical textures on mercury-amalgam plates. The small "bubbles" at upper left are due to electric charges building up on the mercury droplet while under the electron beam.


Small spheres of mercury amalgam were common at many sites in the Don River, but the richest sites were at the location of old landfills.

There is a lot of ornamentation on some of these spheres, but I am not sure of its causes. I would note that we see the same thing on iron spheres of about the same size that we find in West Africa.

There were no surprises during the study--no pans full of mercury. Multiplying the concentrations at several sites by the volume of sediment in the river, and dividing by the total surface area of the drainage basin, gave a mass balance consistent with the amount of mercury expected purely from rainfall.

It turns out that the mercury surprise found earlier came from a spill of sewage from the North Leaside treatment plant in August of 1997 (a few days before the panning expedition). The references to the spill seem to have vanished from the internet, but the article by McAndrew (1997) in the Toronto Star covered it.

At the time we calculated that, given the size of the spill, the amount of mercury that may have entered the river could have been in the hundreds of kilograms, while still satisfying government regulations on mercury concentration in discharged effluent. Such an event released many times more mercury into the river than enters through rainfall (I hesitate to call the rainfall contribution "natural," as most atmospheric mercury results from coal burning).

The local authorities monitor mercury levels in the Don River by measuring mercury content of all of its feeder streams, but make no measurements within the Don itself. The sewage spill went directly into the river, so it is no surprise that our Dear Leaders had no idea of its presence until they were told of my wife's discovery. Now who would have predicted that?

Artisanal mining

Mercury use is endemic among artisanal miners. While I did not find it in use in the coastal region of Ghana (where some of the gold is very coarse), Company representatives did find "white gold" in sands and gravels in the Pra River.

By white gold, I mean an amalgam of gold and mercury. Such a finding means that somebody is using mercury to amalgamate the fine gold grains (it makes them stick together), but is also somehow still losing some of it off the end of the sluice.

Traditionally, the amalgam is purified by heating the white gold over a fire. But other methods work as well.

The hazard of course is that by breathing the fumes you end up like the Mad Hatter.







"Would you like a drop of mercury with that?"













So we used to show off this bag of white gold at shareholder meetings, and somehow after each meeting there seemed to be less of it than before.

One shareholder scoffed that the grains in the bag weren't gold. We let him take one home. I explained that it was amalgamated with mercury and that he shouldn't heat the grain up. He told me he didn't believe me.

He phoned back very excitedly the next day. It was amazing! He put the grain on his stove burner, turned it on, and the grain turned to gold.

And a little more mercury entered the Toronto atmosphere. Now who would have predicted that?

Reference

McAndrew, B., 1997. Don River spill may threaten city beaches. Toronto Star, August 19, 1997. 

Sunday, September 12, 2010

Modeling is hard part 1

By modeling here I mean simulating some real system, whether it be physical, economic, or social. Not the other kind.

There are at least two sources of complexity which affect our ability to model a system.

One is unforeseen complexity in the apparently simple mathematical relationships between variables in the system, usually arising from the nonlinear components (unpredictability arising when we use the right equation).

Another is the lack of understanding we have of how the individual elements in the system actually behave (unpredictability arising when we use the wrong equation).

I had a cousin who was working for TD at the time they were beginning to roll out their ATM system (they called them "Green Machines"). She described for me all of the testing procedures, whereby all the programmers tried to make every mistake they believed customers could possibly make. As problems arose, they fixed them.

The first question that the machine would ask a customer was "Do you have your green card?" and there were buttons for yes or no. If you didn't have a green card, you could use the keypad to enter your account information.

So they rolled out the machines, and on the very first day, a customer entered "no" when asked if he had a card, and then stuck his card in the slot. The entire system crashed. Apparently none of the programmers anticipated that.

Moral: the range of possible behaviours of the elements of a system is greater than can be anticipated.

Addition Sept. 13

On further reflection, the above story was actually about Canada Trust bank machines, not TD. They are the same company now, but were not at the time of the story.

Thursday, September 9, 2010

Are Africans Really This Bored?



I've been slow posting lately because I've had to fix up some deficiencies with an NI 43-101 report I've been writing as well as deficiencies in a recently submitted paper.

Today's topic is about gold and where it comes from. I will show you some images from artisanal mining activities in West Africa.

This goes with the introduction, as one of the main deficiencies I had to fix concerned statements about artisanal mining on the property which was the subject of the NI 43-101. In particular, the securities commission took exception to my statement that vigorous artisanal mining activity was an indicator of the presence of gold.

No, apparently these guys dig holes because of all the time they have on their hands after they are done fishing, hunting, farming, or making things. 

They are just bored.

These particular bored guys are aimlessly digging holes at the edge of the Iduapriem mine property in western Ghana (owned by Anglogold Ashanti, a company in which I own no shares, do not work for, do not consult for, and have no relationship with).

Just whiling away the empty hours . . .


It's hard to get a sense of scale for some of these holes. But I would guess that a lot of people have been bored for a very long time.

Artisanal diamond diggings, Bonsa River, western Ghana.

Notice the line of stakes. You may well be very bored, but you are only allowed to relieve your boredom in a fairly small area. Don't think of alleviating your boredom on someone else's plot.


Here are some Ghanaians who are so bored they are pumping water out of an old mineshaft prior to entering it.

Artisanal gold mining activity near Akwantambra, in western Ghana. Don't know who has this property now.


Parts of West Africa are so rich with gold you can literally find some anywhere. Mining it profitably is, however, something else.


Another galamsey (artisanal mining) camp in western Ghana, about 5 km northwest of Axim. This land is currently held by Adamus Resources (no shares, no shorts, no work, no consult, no relationship apart from having a couple of beers with one of their geologists).

Yes, that man is wearing a life jacket. In the middle of the jungle.

Just to show you the lengths to which these people will go to alleviate their boredom, here is a punchplate screen (hand-punched through steel--ok, ok, they used a tool).

This is to separate the gravels from the sands.

Perhaps you get bored too. Have you ever tried digging holes?

Life was pretty boring in the 17th century too. Here, the locals from what is now called Sawoma village, in western Ghana, relieve their boredom by swimming in the lower Ankobra River, gathering up the rocks at the bottom, then grinding them and panning for gold.

Evidence that South Africans get bored too (Big Hole in Kimberley).

People were pretty bored in Sierra Leone too.

This field is by the Makele River in central Sierra Leone--the holes are from previous artisanal gold mining.


The Sula Mountains are in the background.

Artisanal miners have mined out gold-bearing quartz veins from various hills in the Sula Mountains.



The rock is crushed, and carried to the top of a sluice carved into the side of the mountain.

This little channel is dug down the side of the hill. Natural foliation in the bedrock is used to create riffles. 


The locals collect the gold-bearing concentrate from the pockets between the riffles. They have no pumps, so the site is only active during the rainy season.


Ho hum.


Climbing over piles of tailings (crushed rock) in the Yirisen river, central Sierra Leone. The rock is excavated from nearby hills, and avalanched into the river.

The locals crush the rock and sluice the fines.

Gold from the Butre River, western Ghana (below). 


Anybody think Africans are interested in this stuff? Hello?

Wednesday, September 8, 2010

How Life Imitates the Stock Market, part 6: Clarifying phase space reconstructions

I will look at one approach for clarifying the phase space portraits we constructed last time.

This approach is choosing a suitable embedding dimension, which is the number of dimensions of state space into which we will unfold our time series.

Embedding dimension

The curves in the reconstructed phase space should not intersect, if they are unfolded in the correct number of dimensions. The reason for this will become plain if we once again consider the reconstruction in terms of a function and all of its various time derivatives. Suppose that a system is completely described in a state space. These dimensions will be the scalar measurement, its first, second, and higher time derivatives (speed, acceleration, jerk, jounce or snap, etc.--in the scalar sense).

Each state is defined by a "position" in phase space, with coordinates being the scalar value, the rate of change, the second time derivative, and so on. At the next time step, the system moves to a new state, with a "position" defined by a new scalar, time derivative, second derivative, etc., each of which is uniquely determined by the previous state and the length of the time step.

For two different trajectories to intersect, the state at the point of intersection must define two alternative future states. But this is clearly impossible if the scalar and all of its time derivatives are identical. Thus, the best you can do is have two different trajectories approach arbitrarily closely. If I recall correctly, this argument was first proposed by Hirsch and Smale (1974).

One of the features looked for in a phase space portrait is groups of points on different trajectories which closely approach one another. In particular, we may find that there are regions of phase space towards which many different trajectories evolve, and these trajectories may each remain in one of these regions for some considerable time. Such regions may imply the existence of relatively stable (or metastable) states in the system, which is a possible predictor for future evolution of the system.

In order to determine whether such areas exist, we need to know whether these neighbouring points on different trajectories are actually close to one another, or merely appear close because of the projection of our phase space.

Here we see hypothetical two-dimensional and three-dimensional portraits of the same system. In the two dimensional portrait, points A and B appear to be quite close.

In the three-dimensional phase space, we see that A and B are not close neighbours after all. If we made our interpretation on the basis of the two dimensional portrait, we would be in error.


In the diagram at left, points A and B are called false nearest neighbours. The presence of false nearest neighbours indicates that the state space is not defined in a sufficient number of dimensions (we would say that the embedding dimension is insufficient).

One therefore finds the embedding dimension by increasing the number of dimensions of the reconstructed phase space until there are no more false near neighbours.

The magnitude of the embedding dimension gives you an idea of the number of invariant parameters in the system--at least in an ideal world. Unfortunately, in the real world there is this thing called noise, which is infinite-dimensional; consequently, ever-higher-dimensional reconstructions keep revealing near neighbours that turn out to be false in a still higher dimension. One never reaches the point of having no false near neighbours.

So one may end up selecting an embedding dimension that seems reasonable. The estimate may be wrong, but it can be assessed to a certain extent by studying closely the trajectories in higher dimensions, using mathematical methods too laborious to discuss at this time.
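The false-near-neighbour search can be sketched in a few lines. This is a simplified version of the standard test (a sketch only: numpy is assumed, the tolerance of 15 is a conventional but arbitrary choice, and the demonstration series is a made-up sine wave, not any of the data discussed here):

```python
import numpy as np

def false_neighbour_fraction(series, lag, dim, ratio_tol=15.0):
    """Fraction of nearest neighbours in a dim-dimensional delay
    embedding that fly apart when a (dim+1)-th coordinate is added.
    A neighbour is declared false when the extra coordinate separates
    the pair by more than ratio_tol times their original distance."""
    x = np.asarray(series, dtype=float)
    n = len(x) - dim * lag                      # points that survive in dim+1
    emb = np.column_stack([x[i * lag : i * lag + n] for i in range(dim)])
    extra = x[dim * lag : dim * lag + n]        # the (dim+1)-th coordinate
    false = 0
    for i in range(n):
        d = np.linalg.norm(emb - emb[i], axis=1)
        d[i] = np.inf                           # exclude the point itself
        j = int(np.argmin(d))                   # nearest neighbour in dim dims
        if d[j] > 0 and abs(extra[i] - extra[j]) / d[j] > ratio_tol:
            false += 1
    return false / n

# A clean sine wave unfolds fully in two dimensions, so the fraction
# of false neighbours at dim=2 should come out small.
t = np.linspace(0, 20 * np.pi, 500)
frac = false_neighbour_fraction(np.sin(t), lag=10, dim=2)
```

In practice one would run this for dim = 1, 2, 3, ... and stop when the fraction drops to (near) zero--subject to the noise caveat above.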

I will discuss the use of probability density in phase space next time.


Reference

Hirsch, M. W. and Smale, S., 1974. Differential Equations, Dynamical Systems, and Linear Algebra. Academic Press, San Diego, Calif.

Wednesday, September 1, 2010

How life imitates the stock market, part 5: Choosing a lag for phase space reconstruction

We continue with reconstructing a phase space for Detour Gold Corp. (DGC-T). Disclosure -- long position (which long predates my use of this analytical tool).

A two-dimensional phase space can be reconstructed from a time series by creating a scatter plot of the time series against a lagged copy of itself. By doing so we create a plot which is topologically equivalent (though not identical) to a phase space consisting of the time series plotted against its first time derivative.
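As a concrete illustration, this method of delays can be coded in a few lines. A minimal sketch, assuming numpy; the price series here is invented, not DGC data:

```python
import numpy as np

def delay_embed(series, lag, dim=2):
    """Reconstruct a dim-dimensional phase space from a scalar time
    series by pairing it with lagged copies of itself: each point
    is (x[t], x[t+lag], ..., x[t+(dim-1)*lag])."""
    x = np.asarray(series, dtype=float)
    n = len(x) - (dim - 1) * lag
    if n <= 0:
        raise ValueError("series too short for this lag and dimension")
    return np.column_stack([x[i * lag : i * lag + n] for i in range(dim)])

# Hypothetical daily closing prices; lag=1 pairs each day's close
# with the next day's, as in the one-day-lag plot below.
prices = [25.0, 25.4, 26.1, 25.8, 27.2, 28.0, 27.5]
points = delay_embed(prices, lag=1)   # seven prices give six (x[t], x[t+1]) pairs
```

Scattering the second column against the first produces the sort of lagged plots shown in this post.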

How much of a lag should we choose? It should be intuitively obvious that the lag should not be too small. If, for instance, we had pricing data for our stock collected at one-second intervals, a lag of 1 s would probably not show very interesting dynamics, as 1 s is not enough time for much to happen with price (except very rarely).

The price data in the DGC pricing time series is at one-day intervals, so one day is the smallest lag we can look at. Well, we could look at a zero-lag plot, but it is not very interesting.

The zero-lag 2-d phase space for Detour Gold. Actually, it does show something interesting--there is something of a gap in price from about $25 to $27 and from $28 to $29. But we could have learned that from a simple one-dimensional plot.

A lag of one day (for instance plotting Friday's closing price against Thursday's, etc.) looks a little more interesting. . .


It is a little difficult to make out where the graph starts, but its ending stands out, with the last price in our time series being just above $31. Here is the phase space with a lag of two days . . .


Many of the same features are present, but the curve diverges a little more from the diagonal line. Even though equivalent features can be found in both of the lagged time series, the specific shape of corresponding features changes somewhat from one lag to another.

Here is the phase space with a lag of three days.



And for four . . .



Nothing much new here. The overall form is the same; however, you may notice that the lower part of the plot is spread away from the diagonal more than is the case in the one-day-lag phase space. The last plot is also not quite as tangled as the one-day-lag phase space.

Overall, this is not a very interesting plot. Part of the reason is that the dynamics of the stock's performance over the last nine months have not been very interesting--at least from a dynamics perspective (it is always interesting to see a stock for which one has a long position rising).

The reason for the minor differences in the reconstructed phase spaces has to do with the connection between the lag and the portion of the curve over which the slope (the first derivative) is estimated. When we used the price difference last time, we were actually using it as an estimate of the slope of the tangent to the price function. When I chose to take the price difference over a two-day period (and then divided by two), I was estimating the tangent slope from two points that were separated by two days.

In the small diagram at left, we are interested in finding the slope of the tangent (yellow line) to the curve at point T.

By using the price difference over two days, we calculate the slope of the green line segment, which is our estimate of the slope of the yellow line.

Because the only data we actually have are represented by the red blobs, we cannot find the slope of the yellow line directly.

The lag plot essentially estimates the slope of the yellow line differently.

For the one-day lag, your estimate of the slope is the short red line segment; for the two-day lag, the orange segment; three-day lag, the green; and for the four day lag, the estimate for the slope of the yellow tangent line is the slope of the blue segment.
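In code, these alternative estimates are just forward differences taken over different lags. A toy sketch (the prices are invented for illustration):

```python
def slope_estimate(prices, t, lag):
    """Forward-difference estimate of the tangent slope at day t,
    taken from two prices `lag` days apart--the quantity each lagged
    phase-space axis effectively encodes."""
    return (prices[t + lag] - prices[t]) / lag

# Hypothetical daily closes: the estimated slope at day 0 depends
# on the lag used, just as the coloured line segments in the diagram do.
p = [25.0, 25.4, 26.1, 25.8, 27.2]
s1 = slope_estimate(p, 0, 1)   # roughly 0.4 $/day
s2 = slope_estimate(p, 0, 2)   # roughly 0.55 $/day
```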

Clearly, none of these look like particularly good estimates. So what is the relative merit of choosing one lag over another?

If the time series is especially busy, with a lot of variability, then the slopes estimated between neighbouring points will similarly show a lot of variability. If the slopes are estimated between points that are farther apart (longer lag), there will be less variability.

The choice of a lag can therefore be influenced by the scale of the dynamics of interest. If you are interested in short timescale dynamics, you need to use a short lag, and if you are interested in longer-term dynamics, you need to use a longer lag.

The formal rule (choosing the lag at the first minimum of the average mutual information) is used because this choice maximizes the difference between the two axes, so that the maximum information is revealed.
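The average mutual information itself can be estimated from a histogram. A sketch under stated assumptions (numpy, an arbitrary 16 bins, and a made-up sine series standing in for real data):

```python
import numpy as np

def average_mutual_information(series, lag, bins=16):
    """Histogram estimate (in bits) of the mutual information between
    the series and a copy of itself shifted by `lag`.  The candidate
    embedding lag is the first local minimum of this quantity."""
    x = np.asarray(series, dtype=float)
    a, b = x[:-lag], x[lag:]
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of the series
    py = pxy.sum(axis=0, keepdims=True)   # marginal of the lagged copy
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

# A very short lag shares almost all its information with the original
# series; by a quarter period of this sine the two axes have separated.
t = np.linspace(0, 8 * np.pi, 1000)
ami_1 = average_mutual_information(np.sin(t), lag=1)
ami_62 = average_mutual_information(np.sin(t), lag=62)   # ~quarter period
```

Scanning lags from 1 upward and stopping at the first local minimum gives the formal lag choice mentioned above.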

Unfortunately, the DGC plots don't show a lot of interpretable dynamics. So let's look at some functions with some interesting dynamics.

The plot at left shows the plot of the proxy function for global ice volume over the past million years. Data from Shackleton et al. (1990).

In various papers and presentations over the past ten years I have used this record and others to study global climate dynamics.

The 2-d state space reconstructed below uses a lag of 6,000 years and covers about half of the record shown above (from 500,000 years ago until present).

Now here we have a phase space which allows us to interpret a lot about the dynamics of the system. It is a little complex; like the other plots above, it consists of numerous clockwise loops overlapping one another.

The record would look better rendered in three dimensions, as the apparent intersections would be shown to be areas where the function passes beneath itself.

In the coming discussions we will look at ways to clarify the plot at left and see what can be interpreted from it.


Reference:

Shackleton, N. J., Berger, A. and Peltier, W. R., 1990. An alternative astronomical calibration of the lower Pleistocene timescale based on ODP Site 677. Transactions of the Royal Society of Edinburgh: Earth Sciences, 81: 251-261.