Continuous Retrospectives are not a substitute for retrospective meetings

About a year ago, I proposed to our team that we should adopt a continuous approach for our retrospectives. One of the questions that came up was whether this should be a replacement for regular weekly (or once per sprint) retrospectives.

After a year, I’ve come to the conclusion that you probably need both.

I came up with the idea of a continuous retrospective to fix a specific problem with regular retrospectives: halfway through the sprint, you realise that something is a problem, but by the time you get to your end-of-sprint retrospective, you’ve completely forgotten about it.

Another advantage of continuous retrospectives is that, in theory, they can shorten the turnaround time for resolving problems, so points of friction don’t linger throughout the sprint.

However, I’ve found over the past year that continuous retrospectives have one particular flaw: they can easily lose momentum. It’s all too easy to get complacent about continuous improvement, to end up having your retrospective board sitting neglected in a corner with nobody ever adding anything to it, and before you know what’s happened, you’ve gone from sub-optimal retrospectives to no retrospectives at all.

If you’ve bought into the 37 Signals/Basecamp/whatever they’re called this week “meetings are toxic” ethos, the idea of having one less meeting may sound attractive. However, the ceremony and formality involved in regular retrospective meetings gives them an inertia that keeps them going. It forces each team member to be regularly thinking about what is going well and what isn’t. And it provides a forum for the issues to be discussed in more detail rather than coming up with off-the-cuff solutions that may not be properly thought out.

Continuous retrospectives can definitely offer considerable value. But don’t ditch your end-of-sprint (or weekly) retrospectives. Rather than doing one or the other, you’ll most likely get the best value out of doing both.

Ten things you need to know about the age of the earth

Now as I keep saying, my position on the whole creation and evolution debate is simple.

Make sure that your facts are straight.

Genesis 1-11 is a part of the Bible that leaves a lot open to interpretation, and while it may seem audacious and bold and uncompromising and full of faith to opt for the most radical interpretation (a Literal Six Day Young Earth Creation, non-evolution, dinosaurs on Noah’s Ark etc), if you’re supporting this position with demonstrable falsehoods and displays of ignorance, you won’t be upholding the Bible; on the contrary, you’ll be undermining it.

Unfortunately, I frequently see well-meaning but badly informed Christians making claims about the age of the earth, and about how it is determined, that are demonstrably and indisputably untrue. Some of these are just rumours and hearsay, and some of them just demonstrate ignorance, but there is also a lot of misinformation out there being published by certain people with PhDs who should know better.

So before you rush into the debate with all guns blazing, here are ten things that you need to know about the age of the earth, the ages of rock strata, and how they are determined.

1. It is based on measurement and mathematics, not on guessing or presupposition.

I sometimes hear people claiming that determining the age of the earth is “guessing at best,” or that if you looked at the evidence with different, young-earth “glasses,” you’d get different, young-earth results. Or that old-earth results are based entirely on an a priori commitment to the theory of evolution and philosophical naturalism.

This is nonsense. The age of the earth is determined first and foremost by measuring things. Measuring and guessing are polar opposites, and a measurement gives the same result no matter what “glasses” you look at it through. You can look at Mount Everest all you like through glasses that make it appear to be just four inches high, but that won’t stop you from getting a height of 8,848 metres when you actually measure it.

Anyone who tells you that rocks don’t come with time stamps doesn’t know what they’re talking about. Rocks contain radioactive elements such as uranium-238, potassium-40, rubidium-87 and so on, which decay exponentially at well-established rates. When a rock cools below a certain temperature (the “closure temperature”), the parent isotopes and their decay products are “locked in,” and from that point on their proportions change in a highly predictable way. Scientists can then take samples from the rock, measure its composition, work out how far those proportions have shifted, and from that calculate the age of the rock.
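
If you want to see what that calculation looks like in practice, here is a minimal sketch in Python. The numbers are purely illustrative, and it assumes the simplest possible case of a closed system with no daughter isotope present to begin with (an assumption that, as we’ll see in a moment, isochron dating does away with):

    import math

    def age_from_ratio(daughter_parent_ratio, half_life_years):
        """Age implied by a measured daughter/parent isotope ratio.

        Assumes the simplest case: a closed system with none of the daughter
        isotope present to begin with. (Isochron dating removes that assumption.)
        """
        decay_constant = math.log(2) / half_life_years   # lambda = ln(2) / half-life
        return math.log(1 + daughter_parent_ratio) / decay_constant

    # Illustrative example: for an isotope with a half-life of 1.25 billion years,
    # a daughter/parent ratio of 1.0 corresponds to exactly one half-life.
    print(age_from_ratio(1.0, 1.25e9))   # roughly 1.25e9 years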

You may have heard that radiometric dating has to make assumptions about the original composition of the rocks — how much of the “parent” and “daughter” isotopes were originally present. This is not true. There is a technique called isochron dating which avoids this assumption altogether. By taking multiple samples from the same rock, you can plot a graph of 87Sr/86Sr against 87Rb/86Sr: the slope of the line they form gives the age of the rock without your having to know anything about its original composition. If there has been any contamination or leakage, the points on the graph will not lie on a straight line.
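
To make that concrete, here is a rough sketch of the isochron calculation in Python. The sample values are invented purely for illustration; the decay constant of rubidium-87 is about 1.42e-11 per year:

    import math
    import numpy as np

    # Hypothetical Rb-Sr measurements from several minerals in the same rock.
    # x = 87Rb/86Sr, y = 87Sr/86Sr. The values are invented for illustration.
    rb_sr = np.array([0.5, 1.0, 2.0, 3.5, 5.0])
    sr_sr = np.array([0.710, 0.717, 0.731, 0.752, 0.773])

    slope, intercept = np.polyfit(rb_sr, sr_sr, 1)

    # For an isochron, slope = exp(lambda * t) - 1, so t = ln(1 + slope) / lambda.
    RB87_DECAY_CONSTANT = 1.42e-11   # per year (half-life of about 48.8 billion years)
    age = math.log(1 + slope) / RB87_DECAY_CONSTANT

    print(f"slope = {slope:.4f}, initial 87Sr/86Sr = {intercept:.3f}")
    print(f"isochron age of roughly {age / 1e9:.2f} billion years")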

2. Its assumptions can be — and are — rigorously tested.

The claim that “historical science” relies on assumptions that can’t be tested because nobody was there to check (the “were you there?” argument) is simply not true. Historical assumptions can easily be tested by cross-checking different dating methods whose assumptions are independent of each other.

One particularly spectacular example comes from measuring rates of continental drift. In places such as the Hawaiian islands, the radiometric dates of lava flows increase linearly with distance from the volcanic hot spot over which the islands have formed. In recent years, it has also become possible to measure continental drift directly using GPS. Everywhere we look, the two sets of measurements agree within their margins of error.
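
As a back-of-the-envelope illustration, you can compare the plate speed implied by the radiometric dates with the speed measured directly by GPS. The figures below are rounded approximations for Kauai, not survey-grade data:

    # Two independent measurements of Pacific plate motion, rounded for illustration.
    kauai_distance_km = 520    # approximate distance of Kauai from the active hot spot
    kauai_age_myr = 5.1        # approximate radiometric age of Kauai's lavas, in millions of years

    implied_speed_cm_per_yr = (kauai_distance_km * 1e5) / (kauai_age_myr * 1e6)

    gps_speed_cm_per_yr = 9.0  # approximate present-day plate speed from GPS measurements

    print(f"Speed implied by radiometric dates: about {implied_speed_cm_per_yr:.0f} cm/year")
    print(f"Speed measured directly by GPS:     about {gps_speed_cm_per_yr:.0f} cm/year")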

I’ve occasionally seen claims that different methods only give the same results because they make the same assumptions of uniform rates, or because they adopt the same worldview. This is patent nonsense. The whole point of cross-checks is to test assumptions, not to make them. In any case, any alternative explanation in which the rates weren’t constant would need to have something affecting all the different measurements in exactly the same way, in exactly the same proportions, in exact lock-step with each other, by a factor of up to a million. Since the different rates include nuclear decay, continental drift, formation of tree rings, lake varves, ice core layers, coral growth and a whole lot of other things as well, such a proposition is quite frankly preposterous.

3. Its reliability is measured and calculated as well.

The scientific consensus is that the age of the earth is 4.54 ± 0.05 billion years.

Note the second part of that figure — ±0.05 billion years. It indicates an uncertainty of just one percent. It means that scientists have a 95% confidence that the age of the earth is no younger than 4.49 billion years and no older than 4.59 billion years. Uncertainties fall away exponentially as you move away from the centre of the range, and the chance that the error could be even three or four times as big as that figure is so low as to be effectively zero.

Error bars and uncertainties are not guesses either. They are determined by taking multiple measurements and calculating a statistical property called the standard error of the mean — a precisely defined quantity derived from the “spread” of the results about their average value. There are similar formulae for the uncertainty in the slope of a graph.
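
For the curious, here is what that calculation looks like; the measurement values are made up purely for illustration:

    import math

    # Hypothetical repeated measurements of the same quantity (illustrative values).
    measurements = [4.51, 4.56, 4.53, 4.55, 4.52, 4.54, 4.57, 4.53]

    n = len(measurements)
    mean = sum(measurements) / n

    # Sample standard deviation: the "spread" of the results about their average.
    variance = sum((x - mean) ** 2 for x in measurements) / (n - 1)
    standard_deviation = math.sqrt(variance)

    # Standard error of the mean: the spread divided by the square root of the
    # number of measurements, so more measurements means a smaller uncertainty.
    standard_error = standard_deviation / math.sqrt(n)

    print(f"mean = {mean:.3f}, standard error = {standard_error:.3f}")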

This figure also takes into account the confidence levels that scientists have that rates such as nuclear half lives are constant. They do not blindly assume that these quantities never changed in the past; on the contrary, they study the evidence to determine limits to how much these quantities could have varied over billions of years. These limits are plugged into their formulae.

The end result gives an error of just one percent. Your car’s speedometer is less accurate than that. And six thousand years falls so far outside of this error range as to be ridiculous.

For what it’s worth, the fact that its uncertainty is known contradicts the claims of some YECs that old-earth ages merely arise from old-earth presuppositions designed to make space for evolution. There is no possible way whatsoever of starting off with vague, non-specific “old-earth presuppositions” and ending up with a final result that is constrained to within just one percent.

4. Much of the evidence comes from the oil industry.

Jonathan Baker, a Christian geochronologist and author of the Age of Rocks blog, explains this quite clearly in his article, “Can Young Earth Creationists Find Oil?” As he explains, oil companies need to know both the ages of the oil deposits and their geothermal history. Too young, or too cool, and the deposits will be “premature” — still solid, and impossible to get out of the ground. Too old, or too warm, and they will have been baked into oblivion.

There is no way that petroleum geologists could be artificially inflating the ages of oil deposits and rock strata in order to accommodate an evolutionary worldview. They are paid to produce results that are correct, not results that are ideologically convenient. If their measurements really were ideologically motivated in that way, oil companies would waste a fortune (and a lot of political good will) drilling in all the wrong places, the geologists would be fired and spend the rest of their working lives flipping burgers in McDonald’s, and the radiometric labs would be sued for more than their insurance would ever cover.

5. There is no circular reasoning involved.

The claim that “fossils are used to date rocks and rocks are used to date fossils” is so misleading that it is, to all intents and purposes, a flat-out lie. The rocks that are used to date fossils are dated first and foremost by radiometric dating. Index fossils are then only used to date rocks that cannot be dated radiometrically. There is nothing circular about that.

Similarly, claims that nuclear decay rates are determined from known ages of rocks are also misleading. This would only suggest circular reasoning if this were the only way that these rates were determined. It is not.

Reasoning is only circular if you have two different lines of evidence that both depend entirely on each other. In every claim of supposedly circular reasoning that I have seen, there have been other independent lines of evidence involved which haven’t been mentioned that break the circularity.

6. Fewer than 10% of radiometric results are “bad.”

There is a vast difference between “doesn’t always work” and “never works.”

Young earth organisations love to point out cases where radiometric dating didn’t work — for example, when different methods gave wildly different, or otherwise apparently wrong, results. However, if radiometric dating really were a chaotic mess that couldn’t distinguish between thousands and billions of years, these “bad dates” would be ubiquitous, while close agreement between different dating methods would be all but non-existent.

This is not the case.

Dr. G. Brent Dalrymple, one of the foremost experts on radiometric dating, estimates that no more than 5-10% of radiometric results are “bad.” In other words, more than ninety percent of the time, there is no disagreement; the resultant dates are as expected; and different dating methods do indeed give the same result.

Anomalous dates are not unexpected, and usually indicate that the rocks in question had a complex geothermal history, being heated sufficiently at one point or another to partly reset their “clocks.” However, far from demonstrating that radiometric dating never works, the fact that these cases are the exception rather than the rule demonstrates that 90% of the time, it works just fine.

In any case, there are over forty different isotopes used in radiometric dating, and each will work for some kinds of samples but not others — again, cross-checks between different methods tell us which is which. The fact that carbon-14 does not work on traffic cones, for example, does not prove that uranium-238 does not work on zircon crystals in granites.

Furthermore, the disagreements between different radiometric dates catalogued by the RATE project amounted to at most about twenty percent. Disagreements of just twenty percent in a minority of results do not justify claims that all dating methods are systematically in error by a factor of up to a million.
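
If you want to be precise about what “agreement” means here: two dates agree when their difference is within their combined uncertainties. Here is a minimal sketch of that check, with invented numbers:

    import math

    def dates_agree(age1, error1, age2, error2, k=2.0):
        """True if two dates agree to within k times their combined uncertainty."""
        combined_error = math.sqrt(error1 ** 2 + error2 ** 2)
        return abs(age1 - age2) <= k * combined_error

    # Invented example: two dates on the same formation, in millions of years.
    print(dates_agree(152.0, 1.5, 154.0, 2.0))   # True: concordant within errors
    print(dates_agree(152.0, 1.5, 120.0, 2.0))   # False: discordant, needs an explanation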

7. Radiometric dating is expensive.

The number of samples that have been dated by multiple methods, with no surprises and with an agreement to within one percent or better, to ages far in excess of six thousand years, runs into the hundreds of thousands. Each sample costs as much as a small car to collect, store, process, and analyse in order to determine a date.

If “evolutionists” really were “throwing out dates that don’t fit their preconceived notions,” millions of results costing hundreds of billions of dollars in total must have been discarded over the past sixty years or so.

Why are there no accountants and bean counters creating a stink about this colossal amount of money being thrown away on wholesale scientific fraud? Given the number of young-earth creationists in the US Congress, why are none of them proposing the simple fix to the problem of requiring pre-registration of all radiometric studies? And why is there nothing about it on Wikileaks?

8. There are no reliable findings of primordial radiocarbon in ancient coals and diamonds.

The RATE project, a young-earth creationist research project that ran from 1997 to 2005, claimed to have found carbon-14 in ancient coals and diamonds that by rights should not have contained any.

Their work was reviewed by Kirk Bertsche, an evangelical Christian radiocarbon expert, who pointed out that not only were the amounts of radiocarbon very low, they also showed clear patterns that were characteristic of contamination. For example, the amount of radiocarbon in heavily processed samples was found to be much higher than those that had undergone comparatively little processing.

Furthermore, he pointed out that although they did attempt to take contamination into account, they did not follow the correct procedures for doing so. High-end radiocarbon laboratories go to extreme lengths to avoid contamination, even going so far as to operate in specially constructed buildings with shielding against cosmic rays. The RATE team merely subtracted a “standard background.”

Contamination in carbon-14 dating is well studied and its vectors are well known. Before ancient samples can be demonstrated to contain primordial radiocarbon, contamination must be rigorously eliminated. To date, no finding of primordial radiocarbon in ancient coals and diamonds has adequately done so.
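
To get a feel for the numbers involved, it helps to convert “percent modern carbon” (pMC) into the apparent age it implies, using the standard conventional radiocarbon age formula. The pMC values below are illustrative, but they are on the scale typically associated with contamination:

    import math

    LIBBY_MEAN_LIFE = 8033   # years; conventional radiocarbon ages use the Libby half-life

    def apparent_age(percent_modern_carbon):
        """Conventional radiocarbon age implied by a given percent modern carbon (pMC)."""
        return -LIBBY_MEAN_LIFE * math.log(percent_modern_carbon / 100.0)

    # Even a tiny trace of modern carbon corresponds to an apparent age of tens
    # of thousands of years, nowhere near "a few thousand years old".
    for pmc in [1.0, 0.5, 0.1]:
        print(f"{pmc:>4} pMC gives an apparent age of roughly {apparent_age(pmc):,.0f} years")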

9. Young-earth “evidences” are based on low-precision measurements and unrealistic assumptions.

Consider the amount of salt in the sea — quoted by Answers in Genesis as one of their ten best evidences for a young earth. A number of things are immediately obvious.

  • It is based on rates that are extraordinarily difficult to measure, requiring massive international surveys.
  • No error bars in the measurements are quoted; only a vague hand-waving claim about being “generous to uniformitarians.” The error bars would almost certainly be large — well above ±10% for many of the different factors involved.
  • It is based on rates that cannot realistically have been the same in the past as they are today, even in a “uniformitarian” model.

When up-to-date, accurate figures are used and everything is taken into account, a state of equilibrium — where the long-term amount of salt leaving the sea is the same as the amount entering it — is well within the range of experimental error. In fact, there is no evidence that the seas really are getting saltier with time.
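
To see why the input-output balance is the crux of the matter, here is a crude, order-of-magnitude sketch using rounded figures (treat them as illustrative, not authoritative). Even if you ignore the outputs entirely, all you get is a “residence time” of a few tens of millions of years, which is an upper bound on nothing in particular and certainly not a measurement of the age of the earth:

    # Very rough, rounded figures: an order-of-magnitude sketch, not a serious model.
    ocean_mass_kg = 1.4e21                  # approximate total mass of the oceans
    sodium_fraction = 10.8e-3               # approximate kg of sodium per kg of seawater
    river_input_kg_per_yr = 2e11            # approximate annual sodium input from rivers

    sodium_in_ocean_kg = ocean_mass_kg * sodium_fraction

    # "Residence time": how long today's stock would take to accumulate at today's
    # input rate if nothing ever left the sea. With outputs included, the system
    # can sit at equilibrium indefinitely, so this is not an age of anything.
    residence_time_yr = sodium_in_ocean_kg / river_input_kg_per_yr
    print(f"roughly {residence_time_yr / 1e6:.0f} million years")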

Now compare that to radiometric dating.

  • It only requires a relatively small number of measurements (a few dozen) from each rock formation.
  • The error bars on both the measurements and the final result are often a fraction of one percent, and are always quoted.
  • It is based on rates that cannot realistically have varied in the past without proposing radical new laws of physics for which there is no evidence whatsoever.

The amount of salt in the sea may be only one “evidence” for a young earth, but it is far from unique. Most of the “evidences” are vague at best about exactly what upper limit they place on the age of the earth, and in some cases don’t place any upper limit on the age of the earth at all. On the other hand, old-earth studies tend to be high precision, and the limits they place on the ages of the samples that they test are tight. The underlying assumptions are very, very reliable, for the simple reason that…

10. Accelerated nuclear decay is science fiction.

Perhaps the most outlandish claim that I’ve ever seen coming from the young earth movement is the RATE project’s hypothesis that nuclear decay rates must have been a billion times higher at certain points in the past six thousand years: notably during the first two days of Creation Week and during Noah’s Flood.

What makes this one so outlandish is that they themselves admitted that this would have released enough heat to raise the temperature of the earth’s surface to 22,000°C — nearly four times that of the surface of the sun. Not only would there have been no Flood left, there would have been nothing left to be flooded!

On top of that, they acknowledged that no known thermodynamic process could have removed this amount of heat fast enough, and that any cooling process would also have had to cool rocks such as granites much faster than water, otherwise the oceans would have frozen over.

You can read all this in chapter 10 of the RATE project’s technical report, on pages 758-765.

The RATE team’s ability to downplay the seriousness of this problem is astonishing. Despite their own admission of an impasse of extraordinary proportions, the project has been portrayed as a success, with conferences, books and videos hailing it as providing conclusive evidence for a young earth, and accelerated nuclear decay being presented in their rebuttals of radiometric dating as if it were a proven fact. Randy Isaac of the American Scientific Affiliation (the US equivalent of Christians in Science) describes this portrayal as dishonest, and it’s not hard to see why.

Scientists do not blindly assume that radioactive decay rates are constant. Evidence for this comes from several different directions, not least the numerous cross-checks that are regularly done between radiometric dating and non-radiometric methods such as ice cores, lake varves, tree rings, Milankovitch cycles, and a whole lot more.

In any case, accelerated nuclear decay is the kind of discovery that would win a Nobel Prize if it could be shown to have any merit. Claims of this nature need to be supported by an extraordinary amount of evidence. A single set of studies, carried out by a single team working to a predefined agenda and never replicated or even published in a mainstream peer-reviewed journal, is nowhere near sufficient.

Conclusion

For anyone wishing to discuss science and faith, there are plenty of interesting points for discussion out there, such as the apparent fine tuning that’s evident in the universe. The fact too that there are many questions about physics to which we don’t have the answers should also give us pause for thought, and perhaps instil humility in us as we marvel at God’s creation. But we don’t do ourselves any favours by claiming that scientists don’t know things that they do, or that they are just making things up when they are not, or that they are always changing their minds when they are not, or that they can’t test their assumptions when they can. Nor do we do ourselves any favours by getting all gung-ho and rushing headlong into the debate with all guns blazing only to prove that we haven’t a clue what we are talking about. Proverbs 19:2 says, “It is not good to have zeal without knowledge, nor to be hasty and miss the way.” In our discussions about these matters, let us be wise as serpents and innocent as doves.

What should a .NET renaissance look like?

Aaron Stannard has an interesting blog post in which he talks about all the different ways in which the .NET scene has improved in the past few years. There’s certainly a lot going on in the Microsoft ecosystem to get .NET developers excited, and he mentions six areas in particular where this is evident:

  1. The decoupling of .NET from Windows
  2. The new-found focus on CLR performance
  3. Moving .NET’s tooling to a cross-platform model
  4. The .NET user base is embracing the OSS ecosystem as a whole
  5. The direction of .NET development is pushing users further down into the details of the stack
  6. Microsoft’s platform work being done out in the open

Now these all look pretty exciting, but the litmus test of whether we are seeing a .NET renaissance is whether or not it can attract people who have “left .NET” back into the fold.

I have had little involvement in .NET myself over the past year, since I moved onto a team doing DevOps work on AWS for mostly LAMP-based projects a year or so ago. While I wouldn’t describe myself as having “left .NET” never to return, there is still one very important thing that needs to happen before I would consider it an attractive prospect to pick up that particular baton again.

The .NET community as a whole needs to provide evidence that it is becoming more open to options from beyond the Microsoft ecosystem.

When you move beyond the .NET ecosystem, one of the first things you find is that there is much more cross-flow between the different technology stacks. Developers are much more likely to be familiar with (or at least, willing to try out) other languages outside their usual ambit. Ruby developers won’t think twice about getting their hands dirty with Python, or Go, or Scala, or even C#, if the need arises. Any solution that gets a good enough reputation and meets your business need will be up for consideration — ElasticSearch, DataDog, Terraform, Consul, you name it. Different languages are mixed and matched — and all the more so with the increasing popularity of microservice-based architectures.

By contrast, for many years, most .NET developers have shown very little interest in anything beyond the Microsoft ecosystem. In fact, some of them have even regarded other technology stacks with suspicion if not outright hostility. There’s a widespread attitude in many .NET teams in many companies that unless something is included out of the box in Visual Studio, documented first and foremost on MSDN, promoted by Microsoft MVPs, and certified by Microsoft examinations, you’ve no business whatsoever paying the slightest bit of attention to it. If you’ve ever been told to do something a certain inefficient and cumbersome way for no reason other than That Is How Microsoft Wants You To Do It, or been given a funny look for suggesting you use Python for something, you’ll know exactly what I mean.

Nowhere was this more evident than in the Silverlight community. The reason why Silverlight died and HTML5 took over in its place was that browsers and platforms which were outside of Microsoft’s control — starting with the iPhone and the iPad — started blocking it. Yet Silverlight developers almost unanimously put the blame for Silverlight’s demise at Microsoft’s feet. The fact that there were decisions being made by other browser manufacturers that had to be considered didn’t even seem to enter their minds.

When your team has a healthy level of interaction with other parts of the software development community, you start to see many, many benefits. You learn from other people’s mistakes as well as your own. Your attention is drawn to solutions to problems that you didn’t realise were problems. You get an element of peer review for your best practices. You get a better idea of which tools and technologies are likely to stick around and which aren’t. On the other hand, with a paternalistic, spoon-fed attitude, you end up turning up late to the party and very often completely misunderstanding the processes and tools that are being suggested to you. It’s amazing to visit the ASP.NET architecture forum and see how many .NET developers still cling to horrendously outdated “best practices” such as n-tier, business layers that don’t contain any business logic, or misguided and ultimately futile attempts to make Entity Framework swappable for unknown mystery alternatives.

There are of course many .NET teams that get these things right, and that do successfully engage with teams from elsewhere. But I’d like to see a whole culture shift right across the entire .NET ecosystem. I’d like to see it become commonplace and widespread for .NET teams to go beyond embracing just those bits and pieces from elsewhere that get Microsoft’s imprimatur, such as Git, or bash on Ubuntu on Windows, or Angular.js. I’d like to see a greater willingness to try tools such as make or grunt instead of MSBuild; Terraform instead of Azure Resource Manager; ElasticSearch/Logstash/Kibana instead of SCOM; and so forth. I’d like to see a much greater willingness to augment C# codebases with utilities and helpers written in Python, Ruby or Go where it makes sense to do so.

I’d like to see them fully embrace twelve factor apps, configuration settings in environment variables rather than the abomination that is web.config, container-based architecture, and immutable servers treated as cattle rather than pets. I’d like to see innovations in software development tooling and techniques getting adopted by the .NET community much faster than they have done up to now. You shouldn’t have to wait for Microsoft to take notice and give their imprimatur before you start using tools such as Git, Docker or Terraform, when everyone else has got there already.

Once we get to that point, we can truly say that we are seeing a .NET renaissance.

You probably only need a t2.nano instance for that

For those of you not familiar with AWS, t2.nano is the smallest size of EC2 instance (virtual servers that you can spin up and tear down at will) that they sell. It gives you one vCPU and 512MB of memory, and it costs just $0.0063 per hour, which, at current exchange rates with VAT added on top, works out at about £4.20 a month.

Its main limitation is CPU time. While it allows you to utilise 100% of the CPU for short periods of time, it limits you to 5% on average over 24 hours by a system of CPU credits. You get enough CPU credits for a maximum of 72 minutes of full CPU time a day, and if you run out of credits, your CPU time is throttled to 5%. This means that you can’t use it for CPU-intensive tasks such as trying to compare human and chimp DNA, but there are still plenty of things that you can use it for (the credit arithmetic is sketched out after the list below). Here are some examples:

1. Bastion servers. A bastion is a server that acts as a single point of access for certain services such as SSH or remote desktop, to reduce your network’s attack surface area. If you’re doing things right, with immutable servers, you should only occasionally need to ssh into your servers, if at all. Or, in other words, if a t2.nano instance is too small for your bastion requirements, you’re Doing It Wrong.

2. A cheap-and-cheerful alternative to a NAT gateway. For the cost-conscious, the price of a NAT gateway — needed to let you connect to the Internet from any of your servers that doesn’t have its own public IP address — can come as a shock. $0.048/hour works out at $420/year — a lot of money if all you’re doing with it is downloading software updates every once in a while. But it’s fairly easy to configure an Ubuntu instance as an alternative — and a t2.nano instance works out at a seventh of the price.

3. A source control, CI server or wiki for small teams. A t2.nano instance should easily be sufficient to act as a Gitea or Jenkins server for a Scrum team of about 5-10 people, possibly more. Note however that Go.CD and GitLab both require 1GB of memory, so those options will require larger instance sizes.

4. Low-traffic blogs, personal portfolio websites and the like. A t2.nano instance can handle hundreds of monthly visitors to your average personal website. Additionally, by putting it behind a CDN such as CloudFront or CloudFlare, you can get even more bang for your buck out of it and possibly scale into the thousands or beyond.
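
As an aside, here is the credit arithmetic mentioned above, using the figures AWS published for the t2.nano at the time of writing (check the current documentation before relying on them):

    # t2.nano CPU credit arithmetic, using the figures AWS published at the time.
    baseline_utilisation = 0.05   # 5% of one vCPU, sustained
    credits_per_hour = 3          # one credit = one vCPU running at 100% for one minute

    minutes_per_day = 24 * 60
    baseline_minutes_per_day = baseline_utilisation * minutes_per_day
    credits_per_day = credits_per_hour * 24

    print(f"Baseline allowance: {baseline_minutes_per_day:.0f} vCPU-minutes per day")
    print(f"Credits earned:     {credits_per_day} per day, i.e. {credits_per_day} minutes at 100% CPU")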

There may well be other cases where a t2.nano instance works fine. You should take a look at your metrics, and if your CPU usage on your larger instance is constantly low, you may well benefit from trying the smaller size. One point to note is that the t2.nano instance is not eligible for AWS’s free tier; only the t2.micro instance is available free of charge. However, if your free tier allowance has expired, or if you have maxed it out, do consider the t2.nano as a viable option. You can probably do a lot more with it than you think.
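
Finally, if you want to check whether one of your existing instances is a candidate for downsizing, a minimal sketch using boto3 might look something like the following. It assumes your AWS credentials are already configured; the instance ID is a placeholder:

    from datetime import datetime, timedelta

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Placeholder instance ID: substitute one of your own.
    response = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        StartTime=datetime.utcnow() - timedelta(days=14),
        EndTime=datetime.utcnow(),
        Period=3600,                 # one data point per hour
        Statistics=["Average"],
    )

    datapoints = response["Datapoints"]
    if datapoints:
        average_cpu = sum(dp["Average"] for dp in datapoints) / len(datapoints)
        print(f"Average CPU over the last 14 days: {average_cpu:.1f}%")
        if average_cpu < 5:
            print("Consistently below the t2.nano baseline; worth trying the smaller size.")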