Compared to other Big Tech companies, Apple has been the poster child of privacy protection.
Unfortunately, a recent announcement has breached that trust.
How much power do we want Big Tech to have? And what sort of society do we want?
Apple has built a reputation as the "least evil" Big Tech giant when it comes to privacy. All these companies (Google, Facebook, Apple, Microsoft, Amazon) collect our data and essentially spy on us in a multitude of ways. Apple, however, has cultivated a reputation as by far the least invasive of our mainstream technology options. Their recent dramatic reversal on this issue has caused an uproar. What is going on, and what should we do about it?
Now, the bad news. All the major cloud storage providers have for some time been quietly scanning everything you upload. That is no conspiracy theory: you agree to it when you accept their privacy policies. They do this for advertising (which is what is meant by the phrase "to help us improve our services" in the user agreement). They also look for and report illegal activity. A big part of monitoring for criminal activity is looking for "CSAM," a polite acronym for something horrific. In August, Apple announced that it would take a drastic step further and push a software update for iPhones that would scan and analyze all of the images on your iPhone, looking not just for known CSAM but for any images that a computer algorithm judges to be CSAM.
There are two enormous red flags here. First, the software does not operate on Apple's cloud servers, where you are free to choose whether to park your data and allow Apple to scan it for various purposes. The scanning is performed on your phone, and it would scan every picture on your phone, looking for content that matches a database of bad images.
Image recognition tech is still bad
Why is this a problem for people who do not keep illegal images on their phones? The second red flag is that the software does not look for a specific file (say, horrifying_image.jpg) and ignore all of your personal photos. Rather, it uses what Apple calls "NeuralHash," a piece of computer code that looks for features and patterns in images. You can read their own description here.
Computer image recognition is much better than it once was. Despite the hype surrounding it the past few years, however, it is still extremely fallible. There are many ways that computer image recognition can be baffled by tricks that are not sophisticated enough to fool toddlers. This fascinating research paper covers just one of them. It finds that a 99 percent confident (and correct) image identification of a submarine can be made into a 99 percent confident (but wrong) identification of a bonnet by adding a tiny area of static noise to one corner of the image.
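Apple has not published NeuralHash's internals, so as a stand-in, here is a toy "average hash," one of the simplest perceptual-hashing schemes (everything here is illustrative, not Apple's algorithm). It captures the key property described above: the hash depends on coarse brightness patterns rather than on the exact file, which is also what makes such schemes vulnerable to carefully crafted noise.

```python
def average_hash(pixels, hash_size=8):
    """Toy perceptual hash: block-average the image down to
    hash_size x hash_size cells, then emit a 1 bit for every cell
    brighter than the overall mean."""
    h, w = len(pixels), len(pixels[0])
    cells = []
    for i in range(hash_size):
        for j in range(hash_size):
            r0, r1 = i * h // hash_size, (i + 1) * h // hash_size
            c0, c1 = j * w // hash_size, (j + 1) * w // hash_size
            block = [pixels[r][c] for r in range(r0, r1) for c in range(c0, c1)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    return ''.join('1' if v > mean else '0' for v in cells)

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

# A 16x16 grayscale "image": dark left half, bright right half.
img = [[0] * 8 + [255] * 8 for _ in range(16)]
noisy = [row[:] for row in img]
noisy[0][0] = 255                 # flip a single pixel

print(hamming(average_hash(img), average_hash(noisy)))  # 0: still a match
```

Because near-identical images map to near-identical bit strings, matching is done by Hamming distance rather than exact equality, and that fuzziness is exactly where both false positives and adversarial evasion live.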
Let's imagine for a moment that this algorithm never mistakes baby bath pictures or submarines or cats or dots for illicit material. This could make things worse. How?
The NeuralHash algorithm is trained by scanning examples of the target images it seeks. This collection of training material is kept secret from the public. We do not know whether all the material in the database is CSAM or whether it also includes things such as political or religious images, geographic locations, anti-government statements, misinformation (as defined by whoever has the power to define it), material potentially embarrassing to politicians, or whistleblowing documents against powerful authorities. There are many things that tech companies, federal agencies, or autocratic regimes around the world would love to know whether you have on your phone. The possibilities are chilling.
It would be very easy for Apple to wait out the uproar and then quietly go ahead with the plan a few months from now. The other tech giants likely would follow suit. But remember that Big Tech already tracks our movements and records our private conversations. If the public does not stay vigilant, Big Tech can keep invading what most of us consider to be our private lives. How much more power over us do we want Big Tech to have? And is this the sort of society that we want?
The Einstein field equations appear very simple, but they encode a tremendous amount of complexity.
What looks like one compact equation is actually 16 complicated ones, relating the curvature of spacetime to the matter and energy in the universe.
It showcases how gravity is fundamentally different from all the other forces, and yet in many ways, it is the only one we can wrap our heads around.
Although Einstein is a legendary figure in science for a large number of reasons (E = mc², the photoelectric effect, and the notion that the speed of light is a constant for everyone), his most enduring discovery is also the least understood: his theory of gravitation, general relativity. Before Einstein, we thought of gravitation in Newtonian terms: that everything in the universe that has a mass instantaneously attracts every other mass, dependent on the value of their masses, the gravitational constant, and the square of the distance between them. But Einstein's conception was entirely different, based on the idea that space and time were unified into a fabric, spacetime, and that the curvature of spacetime told not only matter but also energy how to move within it.
This fundamental idea, that matter and energy tell spacetime how to curve, and that curved spacetime, in turn, tells matter and energy how to move, represented a revolutionary new view of the universe. Put forth in 1915 by Einstein and validated four years later during a total solar eclipse, when the bending of starlight coming from light sources behind the sun agreed with Einstein's predictions and not Newton's, general relativity has passed every observational and experimental test we have ever concocted. Yet despite its success over more than 100 years, almost no one understands what the one equation that governs general relativity is actually about. Here, in plain English, is what it truly means.
Einstein's original equation relates spacetime curvature to the stress-energy of a system (top). A cosmological constant term can be added (middle), or equivalently, it can be formulated as dark energy (bottom), another form of energy density contributing to the stress-energy tensor. Credit: © 2014 University of Tokyo; Kavli IPMU
This equation looks pretty simple, in that there are only a few symbols present. But it's quite complex.
The first one, Gμν, is known as the Einstein tensor and represents the curvature of space.
The second one, Λ, is the cosmological constant: an amount of energy, positive or negative, that is inherent to the fabric of space itself.
The third term, gμν, is known as the metric, which mathematically encodes the properties of every point within spacetime.
The fourth term, 8πG/c⁴, is just a product of constants and is known as Einstein's gravitational constant, the counterpart of Newton's gravitational constant (G) that most of us are more familiar with.
The fifth term, Tμν, is known as the stress-energy tensor, and it describes the local (in the nearby vicinity) energy, momentum, and stress within that spacetime.
These five terms, all related to one another through what we call the Einstein field equations, are enough to relate the geometry of spacetime to all the matter and energy within it: the hallmark of general relativity.
A mural of the Einstein field equations, with an illustration of light bending around the eclipsed sun, the observations that first validated general relativity back in 1919. The Einstein tensor is shown decomposed, at left, into the Ricci tensor and Ricci scalar. Credit: Vysotsky / Wikimedia Commons
You might be wondering what is with all those subscripts: those weird "μν" combinations of Greek letters you see at the bottom of the Einstein tensor, the metric, and the stress-energy tensor. Most often, when we write down an equation, we are writing down a scalar equation, that is, an equation that only represents a single equality, where the sum of everything on the left-hand side equals everything on the right. But we can also write down systems of equations and represent them with a single simple formulation that encodes these relationships.
E = mc² is a scalar equation because energy (E), mass (m), and the speed of light (c) all have only single, unique values. But Newton's F = ma is not a single equation but rather three separate equations: Fx = m·ax for the "x" direction, Fy = m·ay for the "y" direction, and Fz = m·az for the "z" direction. In general relativity, the fact that we have four dimensions (three space and one time) as well as two subscripts, which physicists know as indices, means that there is not one equation, nor even three or four. Instead, we have each of the four dimensions (t, x, y, z) affecting each of the other four (t, x, y, z), for a total of 4 × 4, or 16, equations.
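The vector-versus-scalar bookkeeping is easy to make concrete in a few lines (the numbers here are arbitrary):

```python
m = 2.0                                   # mass, kg
a = (1.0, -3.0, 0.5)                      # acceleration components (ax, ay, az)

# Written once as a vector equation...
F = tuple(m * component for component in a)

# ...F = ma is really three scalar equations, one per direction:
assert F[0] == m * a[0]   # Fx = m * ax
assert F[1] == m * a[1]   # Fy = m * ay
assert F[2] == m * a[2]   # Fz = m * az
print(F)
```

One compact symbolic statement, three independent equalities; the Einstein field equations play the same game with 16.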
Instead of an empty, blank, three-dimensional grid, putting a mass down causes what would have been "straight" lines to instead become curved by a specific amount. In general relativity, space and time are continuous, with all forms of energy contributing to spacetime's curvature. Credit: Christopher Vitale of Networkologies and The Pratt Institute
Why would we need so many equations just to describe gravitation, whereas Newton only needed one?
Because geometry is a complicated beast, because we are working in four dimensions, and because what happens in one dimension, or even in one location, can propagate outward and affect every location in the universe, if only you allow enough time to pass. With three spatial dimensions and one time dimension, the geometry of our universe can be treated mathematically as a four-dimensional manifold.
In Riemannian geometry, where manifolds are not required to be straight and rigid but can be arbitrarily curved, you can break that curvature up into two parts: parts that distort the volume of an object and parts that distort the shape of an object. The "Ricci" part is volume distorting, and that plays a role in the Einstein tensor, as the Einstein tensor is made up of the Ricci tensor and the Ricci scalar, with some constants and the metric thrown in. The "Weyl" part is shape distorting, and, counterintuitively enough, plays no role in the Einstein field equations.
The Einstein field equations are not just one equation, then, but rather a suite of 16 different equations: one for each of the "4 × 4" combinations. As one component or aspect of the universe changes, such as the spatial curvature at any point or in any direction, every other component may change in response as well. This framework, in many ways, takes the concept of a differential equation to the next level.
A differential equation is any equation where you can do the following:
you can provide the initial conditions of your system, such as what is present, where, and when it is, and how it is moving,
then you can plug those conditions into your differential equation,
and the equation will tell you how those things evolve in time, moving forward to the next instant,
where you can plug that information back into the differential equation, where it will then tell you what happens subsequently, in the next instant.
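The four steps above can be sketched with the simplest such scheme, Euler's method; the free-fall system below is purely an illustrative choice, not anything specific to relativity:

```python
def euler_step(state, deriv, dt):
    """Advance the system one small time step: plug the current state into
    the differential equation (deriv), then step forward by dt."""
    return [s + d * dt for s, d in zip(state, deriv(state))]

# Illustrative system: free fall near Earth's surface.
# state = [height (m), velocity (m/s)]; dh/dt = v, dv/dt = -g.
def free_fall(state):
    _, v = state
    return [v, -9.8]

state = [100.0, 0.0]
for _ in range(100):              # 100 steps of 0.01 s = 1 second
    state = euler_step(state, free_fall, 0.01)
print(state)  # height ~95.1 m, velocity ~-9.8 m/s after one second
```

Each pass through the loop is exactly the cycle described above: conditions in, rates of change out, advance one instant, repeat.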
It is a tremendously powerful framework and is the very reason why Newton needed to invent calculus in order for things like motion and gravitation to become understandable scientific fields.
When you put down even a single point mass in spacetime, you curve the fabric of spacetime everywhere as a result. The Einstein field equations allow you to relate spacetime curvature to matter and energy, in principle, for any distribution you choose. Credit: JohnsonMartin / Pixabay
Only, when we begin dealing with general relativity, it is not just one equation or even a series of independent equations that all propagate and evolve in their own dimension. Instead, because what happens in one direction or dimension affects all the others, we have 16 coupled, interdependent equations, and as objects move and accelerate through spacetime, the stress-energy changes and so does the spatial curvature.
However, these "16 equations" are not entirely unique! First off, the Einstein tensor is symmetric, which means that there is a relationship between every component that couples one direction to another. In particular, if your four coordinates for time and space are (t, x, y, z), then:
the "tx" component will be equivalent to the "xt" component,
the "ty" component will be equivalent to the "yt" component,
the "tz" component will be equivalent to the "zt" component,
the "yx" component will be equivalent to the "xy" component,
the "zx" component will be equivalent to the "xz" component,
and the "zy" component will be equivalent to the "yz" component.
All of a sudden, there aren't 16 unique equations but only 10.
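That counting argument is mechanical enough to verify with a few lines of code:

```python
from itertools import product

coords = ('t', 'x', 'y', 'z')

# Each of the two indices runs over four coordinates: 4 x 4 = 16 equations.
all_pairs = list(product(coords, repeat=2))

# Symmetry means the (mu, nu) and (nu, mu) components are the same
# equation, so count each unordered pair only once.
unique_pairs = {tuple(sorted(pair)) for pair in all_pairs}

print(len(all_pairs), len(unique_pairs))  # 16 10
```

The 10 survivors split into 4 diagonal components (tt, xx, yy, zz) and 6 off-diagonal ones, matching the six equivalences listed above.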
Additionally, there are four relationships that tie the curvature of these different dimensions together: the Bianchi identities. Of the 10 unique equations remaining, only six are independent, as these four relationships bring the total number of independent variables down further. This freedom lets us choose whatever coordinate system we like, which is literally the power of relativity: every observer, regardless of their position or motion, sees the same laws of physics, such as the same rules for general relativity.
An illustration of gravitational lensing and the bending of starlight due to mass. The curvature of space can be so severe that light can follow multiple paths from one point to another. Credit: NASA / STScI
There are other properties of this set of equations that are tremendously important. In particular, if you take the divergence of the stress-energy tensor, you always, always get zero, not just overall, but for each individual component. That means that you have four symmetries: no divergence in the time dimension or any of the space dimensions, and every time you have a symmetry in physics, you also have a conserved quantity.
In general relativity, those conserved quantities translate into energy (for the time dimension), as well as momentum in the x, y, and z directions (for the spatial dimensions). Just like that, at least locally in your nearby vicinity, both energy and momentum are conserved for individual systems. Even though it is impossible to define things like "global energy" overall in general relativity, for any local system within general relativity, both energy and momentum remain conserved at all times; it is a requirement of the theory.
As masses move through spacetime relative to one another, they cause the emission of gravitational waves: ripples through the fabric of space itself. These ripples are mathematically encoded in the metric tensor. Credit: ESO / L. Calçada
Another property of general relativity that is different from most other physical theories is that general relativity, as a theory, is nonlinear. If you have a solution to your theory, such as "what spacetime is like when I put a single point mass down," you would be tempted to make a statement like, "If I put two point masses down, then I can combine the solution for mass #1 and mass #2 and get another solution: the solution for both masses combined."
That is true, but only if you have a linear theory. Newtonian gravity is a linear theory: the gravitational field is the gravitational field of every object added together and superimposed atop one another. Maxwell's electromagnetism is similar: the electromagnetic field of two charges, two currents, or a charge and a current can all be calculated individually and added together to give the net electromagnetic field. This is even true in quantum mechanics, as the Schrödinger equation is linear (in the wavefunction), too.
But Einstein's equations are nonlinear, which means you cannot do that. If you know the spacetime curvature for a single point mass, and you then put down a second point mass and ask, "How is spacetime curved now?", there is no exact solution we can write down. In fact, even today, more than 100 years after general relativity was first put forth, only about 20 exact solutions are known in relativity, and a spacetime with two point masses in it still is not one of them.
A photo of Ethan Siegel at the American Astronomical Society's hyperwall in 2017, along with the first Friedmann equation at right, which is occasionally known as the most important equation in the universe and is one of the rare exact solutions in general relativity. Credit: Harley Thronson / Perimeter Institute
Originally, Einstein formulated general relativity with only the first and last terms in the equations, that is, with the Einstein tensor on one side and the stress-energy tensor (multiplied by the Einstein gravitational constant) on the other side. He only added in the cosmological constant, at least according to legend, because he could not stomach the consequences of a universe that was compelled to either expand or contract.
And yet, the cosmological constant itself would have been a revolutionary addition even if nature turned out not to have a non-zero one (in the form of today's dark energy) for a simple but fascinating reason. A cosmological constant, mathematically, is literally the only "extra" thing you can add into general relativity without fundamentally changing the nature of the relationship between matter and energy and the curvature of spacetime.
The heart of general relativity, however, is not the cosmological constant, which is simply one particular type of "energy" you can add in, but rather the other two more general terms. The Einstein tensor, Gμν, tells us what the curvature of space is, and it is related to the stress-energy tensor, Tμν, which tells us how the matter and energy within the universe is distributed.
Quantum gravity tries to combine Einstein's general theory of relativity with quantum mechanics. Quantum corrections to classical gravity are visualized as loop diagrams, like the one shown here in white. Credit: SLAC National Accelerator Lab
In our universe, we almost always make approximations. If you ignore 15 of the 16 Einstein equations and simply keep the "energy" component, you recover the theory it superseded: Newton's law of gravitation. If you instead make the universe symmetric in all spatial dimensions and do not allow it to rotate, you get an isotropic and homogeneous universe, one governed by the Friedmann equations (and hence required to expand or contract). On the largest cosmic scales, this actually seems to describe the universe in which we live.
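The Friedmann case lends itself to a quick numeric sanity check. The first Friedmann equation for a flat universe, H² = (8πG/3)ρ, can be inverted to get the critical density; the numbers below are rough, commonly quoted values (H₀ ≈ 70 km/s/Mpc), not a fit to data:

```python
import math

G = 6.67430e-11               # Newton's constant, m^3 kg^-1 s^-2
H0 = 70e3 / 3.086e22          # Hubble constant: ~70 km/s/Mpc, converted to 1/s

# First Friedmann equation for a flat universe: H^2 = (8 * pi * G / 3) * rho.
# Inverting it gives the critical density that makes the universe flat.
rho_crit = 3 * H0**2 / (8 * math.pi * G)

print(rho_crit)  # ~9e-27 kg/m^3: roughly five hydrogen atoms per cubic meter
```

That almost-empty answer, a few atoms per cubic meter, is the density scale on which the expansion of the entire universe turns.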
But you are also allowed to put in any distribution of matter and energy, as well as any collection of fields and particles that you like, and if you can write it down, Einstein's equations will relate the geometry of your spacetime (how the universe itself is curved) to the stress-energy tensor, the distribution of energy, momentum, and stress.
If there actually is a "theory of everything" that describes both gravity and the quantum universe, the fundamental differences between these conceptions, including the fundamentally nonlinear nature of Einstein's theory, will need to be addressed. As it stands, given their vastly dissimilar properties, the unification of gravity with the other quantum forces remains one of the most ambitious dreams in all of theoretical physics.
Plato wrote profusely, and his ideas are intelligent, well argued, and powerful.
His works form the backbone of so many subjects: epistemology, aesthetics, metaphysics, politics, and psychology.
Plato also influenced Christianity, which in turn became a new kind of religion altogether.
Nothing in life can be treated in isolation. Behind every idea, person, discovery, invention, or project is a hidden network of conditions that gave rise to it. This is never truer than in academia. As Isaac Newton famously said, we are all just "standing on the shoulders of Giants."
Philosophy is the same. Almost all its notable thinkers read, debated, and bounced ideas around with their contemporaries. Aristotle was a response to (and taught by) Plato, Chinese legalism was a critique of Confucianism, David Hume and Adam Smith were close friends, Voltaire and Jean-Jacques Rousseau constantly attacked each other, and Thomas Hobbes was in regular correspondence with René Descartes.
So, it is hard to answer the question: who was the most original philosopher? But that doesn't mean we aren't going to try.
The trunk of the tree
Generally, every philosophical issue (in the West, anyway) is prefaced with the line, "It all began with the ancient Greeks." Of these seminal thinkers, Plato is typically considered the foremost. There is an oft-quoted line from A.N. Whitehead that reads, "The safest general characterization of the European philosophical tradition is that it consists of a series of footnotes to Plato."
No doubt, there is some truth to this. Plato wrote profusely, and in both his dialogues and Republic we find the foundations of political philosophy, epistemology, metaphysics, and aesthetics. He was a psychologist before the term even existed: his tripartite division of the soul into Eros (desire), Thumos (spirit or passion), and Logos (rationality) tracks almost perfectly onto Freud's Id, Superego, and Ego.
Importantly, he defined the rules of the philosophical game, in which dialogue, debate, dialectic, and rational sparring are the way to do philosophy. Today, we assume that good arguments must be logical, and that most people, most of the time, want to discover the Truth (with a capital T) of the universe. This all comes from Plato. (It is difficult to find a similar sentiment in Eastern traditions.)
Let me write that down
There is only one problem: it is difficult to say how strictly original Plato was and how much was already kicking around in the ideological zeitgeist of the Peloponnese. All of Plato's dialogues contain a fictionalized version of his master and friend, Socrates, who is almost always the wisest character and the winner of debates. Socrates never wrote anything down himself (and in fact seems to have been opposed to this new-fangled "writing" the kids were up to), so we are left guessing at how much of what we call Plato's was actually from his master. It could be all; it could be none.
Additionally, Plato alludes to other long-lost philosophers, not least Diotima, who is thought to be the first female philosopher and even the teacher of Socrates. Many of these "pre-Socratics" did write, but their work is largely lost, so we have to rely again on Plato and later sources for what they wrote. (The most important and treasured of these is Lives and Opinions of Eminent Philosophers by Diogenes Laërtius.)
However, given the dearth of evidence, we are forced to give Plato his due, even if it is just for being the first to write this stuff down.
How Plato influenced Christianity
If Western philosophy, and the manner in which it is done, is merely a "footnote to Plato," then it is not a stretch to say that Plato's ideas lurk in the background of nearly every philosopher we have read. Thinkers like Descartes, Nietzsche, and Freud were either responding to or building on Plato's ideas.
Arguably more important even than this is how far Platonism influenced Christianity, the largest religion on Earth. The early Church Fathers who formulated the theology and official dogma of the Church were steeped in the knowledge of both Jewish tradition and Greek philosophy, the latter being all but dominated by Plato and the descendants of his school, The Academy.
Plato's idea of a world of Forms, a perfect ideal removed from our corrupt, base world, worked its way into formal Christian doctrine. Many ideas about sins of the flesh and weak mortal bodies were influenced by Plato. In his famous allegory of the cave, Plato argued that we ought not to indulge our worldly whims and desires (Eros) but contemplate and philosophize instead (Logos). All of these ideas tracked perfectly onto the fledgling Church. In fact, John's Gospel opens with the verse: "In the beginning was the Logos, and the Logos was with God, and the Logos was God."
With us still
In the ways that Plato came to define Christianity, we have, again, an entirely new way of doing philosophy, or, in this case, theology. Christianity is an original kind of faith that was half Judea, half Athens.
Plato dominated the Western tradition for centuries, and we still live with his legacy of valuing the intellect and rationality over our earthly lusts. To be called "irrational" is still a bad thing. Even though the likes of Aristotle crept into Christian theology via Thomas Aquinas in the 13th century, and theologians like Augustine, Irenaeus, and Origen had their own impact, none ever left as deep and distinctive a mark as the rationalistic and original ideas of Plato.
We assume that physical constants do not change from time to time or location to location.
Measurements aimed at calculating the fine-structure constant, however, challenge this assumption.
A big puzzle remains unsolved to this day: why do quasars appear to show small but significant differences in the inferred value of the fine-structure constant?
Whenever we examine the universe in a scientific manner, there are a few assumptions that we take for granted as we go about our investigations. We assume that the measurements that register on our devices correspond to physical properties of the system that we are observing. We assume that the fundamental properties, laws, and constants associated with the material universe do not spontaneously change from moment to moment. And we also assume, for many compelling reasons, that although the environment may vary from location to location, the rules that govern the universe always remain the same.
But every assumption, no matter how well-grounded it may be or how justified we believe we are in making it, has to be subject to challenge and scrutiny. Assuming that atoms behave the same everywhere â at all times and in all places â is reasonable, but unless the universe supports that assumption with convincing, high-precision evidence, we are compelled to question any and all assumptions. If the fundamental constants are identical at all times and places, the universe should show us that atoms behave the same everywhere we look. But do they? Depending on how you ask the question, you might not like the answer. Here is the story behind the fine-structure constant, and why it might not be constant, after all.
A number of fundamental constants, as reported by the Particle Data Group in 1986. Although many advances have occurred in the intervening 35 years, the values of these constants have changed very little, with the largest difference being a slight but significant increase in the precision with which they are known. Credit: Particle Data Group / LBL / DOE / NSF
When most people hear the idea of a fundamental constant, they think about the constants of nature that are inherent to our reality. Things like the speed of light, the gravitational constant, or Planck's constant (the fundamental constant of the quantum universe) are often the first things we think of, along with the masses of the various indivisible particles in the universe. In physics, however, these are what we call "dimensionful" constants, which means that they rely on our definitions of quantities like mass, length, or time.
An alternative way to conceive of these constants is to make them dimensionless instead: so that arbitrary definitions like kilogram, meter, or second make no difference to the constant. In this conception, each quantum interaction has a coupling strength associated with it, and the coupling of the electromagnetic interaction is known as the fine-structure constant and is denoted by the symbol alpha (α). Fascinatingly enough, its effects were detected before quantum physics was even remotely understood, and remained wholly unexplained for nearly 30 years.
The Michelson interferometer (top) showed a negligible shift in light patterns (bottom, solid) as compared with what was expected if Galilean relativity were true (bottom, dotted). The speed of light was the same no matter which direction the interferometer was oriented. Credit: Albert A. Michelson (1881); A.A. Michelson and E. Morley (1887)
In 1887, arguably the greatest null result in the history of physics was obtained, via the Michelson-Morley experiment. The experiment was brilliant in conception, seeking to measure the speed of Earth through the "rest frame" of the universe by:
sending light beams in perpendicular directions,
bringing them back together,
thereby constructing an interference pattern,
and measuring how that pattern shifted as the experimental apparatus was rotated.
Michelson originally performed a version of this experiment by himself back in 1881, detecting no effect but recognizing the need to improve the experiment's precision.
Six years later, the Michelson-Morley experiment improved on it by more than a factor of ten, making it the most precise electromagnetic measuring device of its time. Once again, no shift was detected, demonstrating no need for the hypothesized aether. But the apparatus they developed also turned out to be spectacular for measuring the spectrum of light emitted by various atoms. Puzzlingly, where a single emission line was expected to occur at a specific wavelength, sometimes there was just a single line, but at other times there was a series of narrowly spaced emission lines, providing empirical evidence (but without a theoretical motivation) for a finer-than-expected structure to atoms.
In the Bohr model of the hydrogen atom, only the orbital angular momentum of the point-like electron contributes to the energy levels. Adding in relativistic effects and spin effects not only causes a shift in these energy levels, but causes degenerate levels to split into multiple states, revealing the fine structure of the atom. Credit: Régis Lachaume and Pieter Kuiper / Public domain
What is actually happening became clearer with the development of modern quantum mechanics. Electrons orbit the atomic nucleus only in fixed, quantized energy levels, and they can occupy different orbitals, which correspond to different values of orbital angular momentum. Both relativity and quantum spin impose small corrections on these levels. Arnold Sommerfeld first derived this in 1916, recognizing that the narrowly spaced lines were an example of splitting due to the fine structure of atoms; hyperfine structure, from electron/nucleon interactions, was discovered shortly thereafter.
Today, we understand the fine-structure constant in the context of quantum field theory, where it is the probability of an interacting particle having what we call a radiative correction: emitting or absorbing an electromagnetic quantum (that is, a photon) during an interaction. We typically measure the fine-structure constant, α, at today's negligibly low energies, where it has a value that is equal to 1/137.0359991, with an uncertainty of ~1 in the final digit. It is defined as a dimensionless combination of dimensionful physical constants: the elementary charge squared divided by Planck's constant and the speed of light, and the value we measure today is consistent across all sufficiently precise experiments.
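As a check, that dimensionless combination can be computed directly from the SI values of the dimensionful constants (in SI units a factor of 4πε₀ also appears in the denominator):

```python
import math

e = 1.602176634e-19       # elementary charge, C (exact in the 2019 SI)
hbar = 1.054571817e-34    # reduced Planck constant, J*s
c = 299792458             # speed of light, m/s (exact)
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m

# alpha = e^2 / (4 * pi * eps0 * hbar * c): dimensionless by construction.
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)

print(1 / alpha)  # ~137.036
```

Because the units cancel completely, this number would come out the same for an alien physicist using entirely different units, which is exactly why dimensionless constants are the ones worth asking questions about.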
In quantum electrodynamics, higher-order loop diagrams contribute progressively smaller and smaller effects. However, as the energy increases, these higher-order processes become more efficient, and thus the value of the fine-structure constant increases with energy. Credit: American Physical Society, 2012
At high energies in particle physics experiments, however, we notice that the value of α gets stronger: as the energy of the interacting particle(s) increases, so does the strength of the electromagnetic interaction. When the universe was very, very hot, such as at energies achieved just ~1 nanosecond after the Big Bang, the value of α was more like 1/128, as particles like the Z-boson, which can only exist virtually at today's low energies, can more easily be physically "real" at higher energies. The interaction strength is expected to scale with energy, an instance where our theoretical predictions and our experimental measurements match up remarkably well.
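To see where a number like 1/128 comes from, here is a minimal sketch of the leading-log, one-loop running of α in QED, deliberately including only the electron loop. With the electron alone, 1/α at the Z-boson mass comes out near 134.5; folding in every charged particle of the Standard Model brings it down to roughly the 1/128 quoted above:

```python
import math

ALPHA_0 = 1 / 137.035999   # measured fine-structure constant at low energy
M_E = 0.000511             # electron mass, GeV
M_Z = 91.1876              # Z boson mass, GeV

def running_alpha(q, alpha0=ALPHA_0, m=M_E):
    """Leading-log, one-loop QED running of alpha, with ONLY the
    electron loop included (a deliberate simplification)."""
    return alpha0 / (1 - (alpha0 / (3 * math.pi)) * math.log(q**2 / m**2))

print(1 / running_alpha(M_Z))  # ~134.5 from the electron loop alone
```

The direction of the effect is the point: every charged particle that can appear in loops screens the bare charge less effectively at short distances, so α grows with energy.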
However, there is an entirely different way to measure the fine-structure constant at today's low energies: by measuring spectral lines, or emission and absorption features, from distant light sources throughout the cosmos. As background light from a source strikes the intervening matter, some portion of that light is absorbed at specific wavelengths. The exact wavelengths that are observed depend on a number of factors, such as the redshift of the source but also on the value of the fine-structure constant.
The light from ultra-distant quasars provides cosmic laboratories for measuring the gas clouds it encounters along the way, with the exact properties of those absorption lines revealing the fine-structure constant's value. Credit: Ed Janssen / ESO
If there are any variations in α, either over time or directionally in space, a careful examination of spectral features from a wide variety of astrophysical sources, particularly if they span many billions of years in time (or billions of light-years in distance), could reveal those variations. The most straightforward way to look for these variations is through quasar absorption spectroscopy, where the light from quasars, the brightest individual sources in the universe, encounters every intervening cloud of matter that exists between the emitter (the quasar itself) and the observer (us, here on Earth).
There are very intricate, precise energy levels that exist for both normal hydrogen (with an electron bound to a proton) and its heavy isotope deuterium (with an electron bound to a deuteron, which contains both a proton and a neutron), and these energy levels are just slightly different from one another. If you can measure the spectra of these different quasars and look for these precise, very-slightly-different fine and hyperfine transitions, you would be able to measure Îą at the location of the quasar.
Narrow-line absorption spectra allow us to test whether constants vary by looking at variations in line placements. Large numbers of systems investigated for fine and hyperfine splitting can reveal if there's an overall varying effect. Credit: M. T. Murphy, J. K. Webb, V. V. Flambaum, and S. J. Curran
If the laws of physics were the same everywhere throughout the universe, then based on the observed properties of these lines, which include:
the same wavelengths and frequencies,
the same ratios between transitions within atoms,
and the same sets of absorption features across a wide variety of distances,
you would expect to be able to infer the same value of Îą everywhere. The only difference you would anticipate would be redshift-dependent, where all the wavelengths for a specific absorber would be systematically shifted by the same redshift-dependent factor.
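That redshift-dependent expectation is easy to sketch numerically. In the snippet below (the two rest wavelengths are hydrogen's Lyman-alpha and Lyman-beta lines, used purely for illustration rather than actual fine-structure transitions), multiplying every wavelength by the same (1 + z) factor leaves the transition ratios untouched, which is why a genuine shift in those ratios cannot be blamed on redshift:

```python
# Cosmological redshift scales all observed wavelengths by the same (1 + z)
# factor, so the *ratio* of any two transitions from the same absorber is
# redshift-invariant. Wavelengths below are illustrative hydrogen lines.

def observed_wavelength(rest_wavelength_nm: float, z: float) -> float:
    """Cosmological redshift: lambda_obs = (1 + z) * lambda_rest."""
    return (1.0 + z) * rest_wavelength_nm

line_a, line_b = 121.6, 102.6  # nm: Lyman-alpha and Lyman-beta of hydrogen

for z in (0.0, 1.0, 2.5):
    ratio = observed_wavelength(line_a, z) / observed_wavelength(line_b, z)
    print(f"z = {z}: wavelength ratio = {ratio:.6f}")  # same at every z
```

Any measured deviation of such ratios from their laboratory values therefore points to something beyond redshift, such as a different value of α at the absorber.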
Yet, that is not what we see. Everywhere we look in the universe, at every quasar and every example of fine or hyperfine structure in the intervening, absorptive gas clouds, we see tiny, minuscule, but non-negligible shifts in those transition ratios. At the level of a few parts-per-million, the value of the fine-structure constant, α, appears to vary observationally. What is remarkable is that this variation was not expected or anticipated but has robustly shown up, over and over again, in quasar absorption studies going all the way back to 1999.
Spatial variations in the fine-structure constant are inferred from quasar absorption data. Unfortunately, these individual variations between systems are significantly larger than any overall variation seen in space or time, casting severe doubt on those conclusions. Credit: J.K. Webb et al., Phys. Rev. Lett. 107, 191101 (2011)
Beginning in 1999, a team of astronomers led by Australian astrophysicist John K. Webb started seeing evidence that α differed between different astronomical measurements. Using the Keck telescopes and over 100 quasars, they found that α was smaller in the past and had risen by approximately 6 parts-per-million over the past ~10 billion years. Other groups were unable to verify this, however, with complementary observations from the Very Large Telescope showing the exact opposite effect: that the fine-structure constant, α, was larger in the past and has been slowly decreasing ever since.
Subsequently, Webb's team obtained more data with greater numbers of quasars, spanning larger fractions of the sky and cutting across cosmic time. A simple time-variation was no longer consistent with the data, as the variations were inconsistent from place to place and did not scale directly with either redshift or direction. Overall, there were some places where α appeared larger than average and others where it appeared smaller, but there was no overall pattern. Even with the latest 2021 data, the few-parts-in-a-million variations that are seen remain inconclusive.
Variations in the fine-structure constant across a wide variety of quasar systems, sorted by redshift. This latest work leverages four separate systems at high redshift, but sees no net evidence for a time-variation in the constant itself. Credit: M.R. Wilczynska et al., Sci. Adv. 2020 Apr; 6(17): eaay9672
It is often said that "extraordinary claims require extraordinary evidence," but the uncertainties associated with each of these measurements were at least as large as the suspected signal itself: a few parts-per-million. In 2018, however, a remarkable study, even though it was of only one system, had the right confluence of properties to be able to measure α, at a distance of 3.3 billion light-years away, to a precision of just ~1 part-per-million.
Instead of looking at hydrogen and deuterium, isotopes of the same element with the same nuclear charges but different nuclear masses, researchers using the Arecibo telescope, in one of its last major discoveries, found two absorption lines of the hydroxyl (OH-) ion, at 1720 and 1612 megahertz, toward a rare and peculiar blazar. These absorption lines have different dependencies on the fine-structure constant, α, as well as on the proton-to-electron mass ratio, and yet the measurements combine to show a null result: consistent with no variation over the past ~3 billion years. These are, to date, the most stringent astronomical constraints on tiny changes in the fine-structure constant's value, consistent with no effect at all.
The Arecibo radio telescope as viewed from above. Its 1000-foot (305 m) diameter made it the largest single-dish telescope from 1963 until 2016, and it leaves behind a legacy of tremendous scientific discovery. Credit: H. Schweiker/WIYN and NOAO/AURA/NSF
The observational techniques that have been pioneered in quasar absorption spectroscopy have allowed us to measure these atomic profiles to unprecedented precision, creating a puzzle that remains unsolved to this day: why do quasars appear to show small but significant differences in the inferred value of the fine-structure constant between them? We know there has been no significant variation over the past ~3 billion years, from not only astronomy but from the Oklo natural nuclear reactor as well. In addition, the value is not changing today to 17 decimal places, as constrained by atomic clocks.
It remains possible that the fundamental constants did actually vary a long time ago, or that they varied differently in different locations in space. To untangle whether that is the case or not, however, we first have to understand what is causing the observed variations in quasar absorption lines, and that remains an unsolved puzzle that could just as easily be due to an unidentified error as it is to a physical cause. Until there is a confluence of evidence, where many disparate observations all come together to point to the same consistent conclusion, the default assumption must remain that the fundamental constants really are constant.
Researchers observe in a series of photos an unexpected rescue of two young wild boars from a trap.
The whole rescue took less than half an hour thanks to a clever adult female wild boar.
Aside from the fact of the rescue itself, there are signs that the rescuer was acting out of empathy for the captives.
There is a danger in attributing human-like motivations to animal behavior. We have no way, after all, of really knowing what is going on in a non-human's mind. Controlled experiments can sometimes strongly suggest intent, but it is difficult to be sure. Every now and then, though, there is just no escaping the obvious.
One such case is reported in a new study by a team of scientists from the Czech University of Life Sciences, working in the Voděradské Bučiny National Nature Reserve. The team was actually researching African swine fever protection measures when their motion-triggered camera caught something amazing.
The researchers observed a female adult wild boar coming to the quick rescue of two young boars caught in a trap. The adult boar's response was quick, and it was smart. If its actions alone were not enough to convince an observer of prosocial behavior, its signs of distress during the rescue are difficult to interpret as anything but empathy for the terrified captives.
What counts as a rescue?
According to the study, to qualify as a deliberate rescue, four things must be true:
The captive has to be in distress.
The rescuer must put itself in harm's way to make the rescue.
The rescuer's actions must amount to an effective solution, even if unsuccessful.
The rescuer derives no immediate benefit "in terms of food rewards, social contact, protection, or mating opportunities."
The rescue occurred just before and after midnight on the morning of January 29, 2020. The camera captured 93 photos.
The box trap had two sides held open by a wire. When the wire is tripped by an animal inside, the walls swing down into place and are held by logs that roll down from the top of the box. Essentially, the box is "locked" shut.
The researchers had set their box trap using corn as bait, and two young wild boars fell for the lure. Two hours and six minutes later, four other boars were seen wandering around the front and back of the trap for about four minutes, after which they left.
A couple of hours later, around 11 pm, a rescue party of at least eight boars led by a female adult appeared in the photos. Once underway, the entire rescue took just 29 minutes, with the first log removed after only six minutes.
The inescapable conclusion, judging by the speed of the jailbreak, is that the rescue team, particularly the lead female, was clever enough to understand what was locking the captives in. The adult female kept charging the logs until they were dislodged. (See the headline image.) Once the logs were removed, the young boars pushed through and out.
Credit: Masilkova, et al., Scientific Reports, 2021. CC 4.0
Prosocial indicators
Aside from the obvious fact of the situation (that the adult boar cared enough about the victims' welfare to effect a rescue, meeting all the requisite criteria above), a physiological clue confirms it.
When wild boars become distressed, they exhibit piloerection. Essentially, the hairs on their manes (that is, the backs of their necks) stand on end. The photos reveal that the adult female's mane was clearly showing piloerection, a sign that she was viscerally distressed by the captives' plight.
Daoism is the philosophy that there is a right way to live life, one found by discovering and following the "Dao," or path, of both our own life and the universe.
Yin-Yang is the symbol that represents difference yet unity in life. It depicts not conflict or struggle but the idea that nothing in life is solely either this or that.
When things in life feel wrong, or if you get that gut feeling that you are on the wrong path, Daoism offers advice about how to get things straight.
No person is one thing. The kindest person you know has a tiny recess of cruelty in them. The happiest person you have ever met will have their depressive moments. The gentlest person you can think of can be filled with rage by one particular thing. There is no purity of any kind; life is a messy cocktail of things.
This is the truth behind one of the most famous symbols (and tattoos) in the world: the Yin and Yang.
Well en-Dao-ed wisdom
For such a well-known idea, the Yin and Yang appears in only one line of the central Daoist book, the Daodejing. And yet, it is essential to Daoism and is, in many ways, interchangeable with the Dao itself.
Lao Tzu is the semi-mythical founder of Daoism (or Taoism; the sound is halfway between a T and a D to the non-Chinese ear). His name means "Old Master," and it is unclear whether he was a single historical person or a title given to a collection of sages and their works. But what matters is Lao Tzu's influence, not least for the 20 million Daoists worldwide.
The Dao translates as "The Way" and is often compared to the flow of a river. Like a river, the Dao moves and directs all things, and we are like boats floating along its path. To be happy is to let the Dao carry us on. To row against the current is hard, and Daoism is the simple call to "go with the flow" of the universe.
Daoism is to find the harmony in life. This is to let the self mold to the world, like the way water fills a cup. It is to adapt, compromise, and take life as it comes, not as you want to force it. If your life is a forest, the Dao is the wide, paved, and easy path. This is not to say that there are not other paths (such as the "human way"), but why struggle through thorns and thickets when life could be happy and easy? Daodejing is a dense wonder of proverbs, advice, wisdom, and fables to guide the Daoist in finding this path.
Yin-Yang, then, is a guide to that path. It is a hint and a signpost about what the Dao looks like. In short, Yin-Yang is the idea that there is a duality to everything. But rather than this being some kind of oppositional or destructive conflict between two rivals, the Yin-Yang argues that there is a great harmony to be found in the contrast between things. The symbol does not feature a fully black side set against a fully white side. The white has a bit of black, and the black a bit of white. Contrast, yet harmony.
Yin is associated with darkness, femininity, mystery, passivity, the night sky, or the old. Yang is associated with lightness, energy, activity, clarity, the sun, or youth.
But neither Yin nor Yang is superior in any way. They are both utterly amoral, in that neither is "right" or "wrong." While the Yin is associated with the negative, this does not come attached with a value judgment; it is better thought of, perhaps, as the negative terminal of a battery. Right living comes not from being either one thing or another but from finding that balance: the Dao not only of our life but of all existence. It is the feeling that we have found our right path.
And to do this, both Yin and Yang are essential. The symbol expresses the idea that balance and harmony are necessary for all things. In the martial arts, for instance, it is important that we be hard, strong, and fit (Yang), but these are nothing without being calm, focused, and adaptable (Yin). In a relationship, we can party and laugh (Yang), but we must also cry and share secrets (Yin).
The tightrope of life
Sometimes, things just feel wrong. It might be a relationship, a career, or even a new book or TV show. It is as if everything is a slog, where you have to put in an inordinate amount of effort just to keep moving. It can feel almost as if obstacles constantly pop up to block you.
It is precisely this feeling that Daoism takes on. This kind of struggle is a sure sign that you have fallen from The Way. Life ought not feel like this. It means something is wrong.
Daoism generally, and the Yin-Yang specifically, is about harmony and balance. Things go wrong when we tip the scales too far one way. Daoists are neither ascetics nor bibulous gluttons, as both involve straying from the middle way. The wisdom of the Yin-Yang is to see how a world without light would be hellish but so too would one of constant day. The symbol has proved so powerful because it is a constant reminder to us that life is all about finding that harmony out of opposition. When things feel wrong, we likely need to find our balance or center again.
This idea that anxiety is dynamic and changeable blew me away. Sure, anxiety is an inevitable feature of life, and none of us is immune. But understanding anxiety against this more fulsome backdrop has allowed me to stop struggling against it. Instead of treating my feelings as something I need to avoid, suppress, deny, or wrestle to the ground, I have learned how to use anxiety to improve my life.
What a relief. Like all of us, I will always encounter bouts of anxiety. But now, I know what to do when those negative thoughts move into my mind like an unwanted roommate. I can recognize the signals and make adjustments that will take the edge off, calm my body, or settle my mind so I can once again think clearly and feel centered. What a boon to my life: personally, professionally, and certainly emotionally. I feel more satisfaction and meaning from my work. I have finally achieved a work-life balance, something that always seemed out of reach. I am also much better able to enjoy myself, find time for different kinds of pleasure, and feel relaxed enough to reflect on what matters most to me. And that's what I desire for you, too.
We tend to think about anxiety as negative because we associate it only with negative, uncomfortable feelings that leave us with the sense that we are out of control. But I could see another way of looking at it once we open ourselves to a more objective, accurate, and complete understanding of its underlying neurobiological processes. Yes, there are inherent challenges to taking ownership of patterns of responding that dictate our thoughts, feelings, and behaviors without our even realizing it. If you tend to experience anxiety when you even think about speaking in public, your brain-body will more or less dictate that response, unless you consciously intervene and change it. But I saw evidence of the opposite: that we can intervene and create positive changes to the anxiety state itself.
This dynamic interaction between stress and anxiety made perfect sense to me because it brought me back to the primary area of my neuroscience research: neuroplasticity. Brain plasticity does not mean that the brain is made of plastic. Instead, it means that the brain can adapt in response to the environment (in either enhancing or detrimental ways). The foundation of my research into the improvement of cognition and mood is based on the fact that the brain is an enormously adaptive organ, which relies on stress to keep it alive. In other words, we need stress. Like a sailboat needs wind in order to move, the brain-body needs an outside force to urge it to grow, adapt, and not die. When there's too much wind, the boat can go dangerously fast, lose its balance, and sink. When a brain-body encounters too much stress, it begins to respond negatively. But when it does not have enough stress, it plateaus and begins to coast. Emotionally, this plateau might feel like boredom or disinterest; physically it can look like a stagnation of growth. When the brain-body has just enough stress, it functions optimally. When it has no stress, it simply lists, like a sailboat with no wind to direct it.
Just like every system in the body, this relationship to stress is all about the organism's drive for homeostasis. When we encounter too much stress, anxiety drives us to make adjustments that bring us back into balance or internal equilibrium. When we have just the right kind or amount of stress in our lives, we feel balanced â this is the quality of well-being we always seek. And it's also how anxiety works in the brain-body: it's a dynamic indication of where we are in relation to the presence or absence of stress in our lives.
When I started making changes to my lifestyle and began to meditate, eat healthy, and exercise regularly, my brain-body adjusted and adapted. The neural pathways associated with anxiety recalibrated and I felt awesome! Did my anxiety go away? No. But it showed up differently because I was responding to stress in more positive ways.
And that is exactly how anxiety can shift from something we try to avoid and get rid of to something that is both informative and beneficial. What I was learning how to do, backed up by my experiments and my deep understanding of neuroscience, was not just engage in new and varied ways to shore up my mental health through exercise, sleep, food, and new mind-body practices but to take a step back from my anxiety and learn how to structure my life to accommodate and even honor those things at the heart of my anxious states. This is exactly how anxiety can be good for us. In my own research experiments at NYU, I have started to identify those interventions (including movement, meditation, naps, social stimuli) that have the biggest impact on not only decreasing anxiety levels per se but also enhancing the emotional and cognitive states most affected by anxiety, including focus, attention, depression, and hostility.
And that realization of how anxiety works, my friends, became the subject, and the promise, of this book: understanding how anxiety works in the brain and body and then using that knowledge to feel better, think more clearly, be more productive, and perform more optimally. In the pages ahead, you will learn more about how you can use the neurobiological processes underlying anxiety, worry, and general emotional discomfort to lay down new neural pathways and set down new ways of thinking, feeling, and behaving that can change your life.
Our inherent capacity for adaptation offers the power to change and direct our thoughts, feelings, behaviors, and interactions with ourselves and others. When you adopt strategies that harness the neural networks of anxiety, you open the door to activating your brain-body at an even deeper, more meaningful level. Instead of feeling at the mercy of anxiety, we can take charge of it in concrete ways. Anxiety becomes a tool to supercharge our brains and bodies in ways that will resound in every dimension of our lives: emotionally, cognitively, and physically. This is the domain of what I call anxiety's superpowers. You will shift from living in a moderately functional way to functioning at a higher, more fulfilling level; from living an ordinary life to one that is extraordinary.
My book, GOOD ANXIETY, is about taking everything we know about plasticity to create a personalized strategy of adapting our responses to the stress in our lives and using anxiety as a warning signal and opportunity to redirect that energy for good. Everyone's particular flavor of positive brain plasticity will be a bit different because everyone manifests anxiety in unique ways, but when you learn how you respond, how you manage the discomfort, and how you typically cope and reach for that homeostatic balance, then you will find your own personal superpowers of anxiety. Anxiety can be good... or bad. It turns out that it's really up to you.
This article was originally published on our sister site, Freethink.
Oxford scientists have proposed what they believe is a more sustainable approach to copper mining: digging deep wells under dormant volcanoes to suck out the metal-containing fluids trapped beneath them.
The status quo: Currently, most copper mining is done via open pits. Drills and explosives blast away rock near the surface, which is then transported to a processing facility. There, the rock is crushed so that the tiny portion of copper in it can be extracted.
This extraction process often involves toxic chemicals, and once the copper is removed, the waste rock that remains must be shipped to a disposal site so that it doesn't contaminate the environment.
The challenge: All of the digging, extracting, and transporting involved in copper mining can be energy intensive and environmentally damaging, but the world needs more copper today than ever before.
Electric vehicles contain four times the copper of their fossil fuel-powered counterparts, and the metal is a key component of solar, wind, and hydro generators. That makes copper a key player in the transition to a more sustainable energy system.
The idea: Rather than focusing our copper mining efforts on rock, the Oxford team suggests we look to water â specifically, the hot, salty water trapped beneath dormant volcanoes.
"Volcanoes are an obvious and ubiquitous target." JON BLUNDY
These brines contain not only copper, but also gold, silver, lithium, and other metals used in electronics, and we might be able to extract them without wreaking havoc on the environment.
"Getting to net zero will place unprecedented demand on natural metal resources, demand that recycling alone cannot meet," lead author Jon Blundy said in a press release.
"We need to be thinking of low-energy, sustainable ways to extract metals from the ground," he continued. "Volcanoes are an obvious and ubiquitous target."
Brine mines: After years of research, the Oxford team has published a study on the mining of metals from dormant volcanoes, and according to that paper, the process has tremendous potential, but it wouldn't be easy.
The wells would need to be more than a mile deep, and there's a small chance the extraction could trigger a volcanic event â something that would need to be assessed in advance of any drilling.
The equipment used for the extraction process would also need to be able to withstand corrosion from the brine and temperatures in excess of 800 degrees Fahrenheit.
Worth exploring: If these technical and safety challenges can be overcome, the researchers predict that copper mining at dormant volcanoes would be more cost-effective than at open pits.
It would also be less environmentally damaging, as geothermal energy from the volcanoes themselves could be harnessed to power the process.
And because dormant volcanoes are widespread, copper mining wouldn't be limited to just a handful of countries, as is the case currently.
The next steps: The team is now looking for a site to dig an exploratory well, which should help them better understand both the potential of tapping into this new source of metal and the challenges involved in the process.
"Green mining is a scientific and engineering challenge which we hope that scientists and governments alike will embrace in the drive to net zero," Blundy said.
A study finds that people associate personality traits with faces.
People thought to have similar personalities were viewed as looking alike; people thought to look alike were viewed as having similar personalities.
The research holds a surprise for Vladimir Putin and Justin Bieber.
Humans are so good at identifying faces that we see them in places where they do not exist, such as on the moon or Mars or in combinations of circles, line segments, and dots. It is a particularly useful skill for a social animal. Yet, how exactly we recognize faces and process them is not exactly known. For instance, the Thatcher effect shows that our brains do not simply accept sensory input when deciding what a normal face looks like.
Now, a new study published in the journal Cognition shows that what we think of a person influences our perception of their facial features. In other words, we think people with similar personality traits look the same.
The social aspect of facial recognition
Image courtesy of NYU's Jonathan Freeman
The initial study, carried out with the help of roughly 200 volunteers, had famous faces placed next to each other above a test picture of one of them. Volunteers had to then move their cursor from the test picture to the image of the same person as quickly as possible. Subjects then rated the likelihood that each famous person in the study had particular personality traits.
The people used in the study, all white men for the sake of consistency, were Justin Bieber, George W. Bush, Bill Clinton, Jimmy Fallon, Ryan Gosling, Matthew McConaughey, Bill Murray, Bill Nye, Vladimir Putin, Keanu Reeves, John Travolta, and Mark Wahlberg, among others.
The results showed that the volunteers were inclined to think that people with similar traits looked more alike than those with differing traits. Three more studies followed to confirm the original findings. Two of them focused on showing that the effect works backward; that is, people with similar faces were thought to have similar traits.
The final test sealed the deal. Participants were shown faces that none of them had ever seen before. Once again, they reported that faces looked similar if they were told the people shared similar personality traits and vice versa.
Senior author Jonathan Freeman of New York University's Department of Psychology summarized the findings in a press release:
"Our findings show that the perception of facial identity is driven not only by facial features, such as the eyes and chin, but also distorted by the social knowledge we have learned about others, biasing it toward alternate identities despite the fact that those identities lack any physical resemblance."
Pootie-Poot and the Bieb
This study adds to the evidence for a "social-conceptual" approach to facial recognition. According to the authors, these models suggest that our ideas of a person are difficult to separate from how we view their faces. As they explain in the introduction of their study:
"[A]ccording [to] these models, after presented with a face, the processing of visual features begins activating identity representations⌠and these in turn begin activating social-conceptual representations, such as personality traits (e.g., bold, diligent, competent)."
Other studies have shown that setting is also important to our ability to recognize faces. Why volunteers think that Justin Bieber and Vladimir Putin look alike remains a bit of a mystery.
Is there an external reality? Is reality objective? Is the information your senses are feeding you an accurate depiction of reality? Most neuroscientists and scientific leaders believe that we can only comprehend a sliver of what is true reality.
Although we assume our senses are telling us the truth, they actually fabricate a version of reality for us. Considering that senses differ from person to person, and that through our unique senses we can only interpret a fraction of what is real, no individual can hold an all-encompassing and true perspective. Because of this, we need to take our perceptions seriously, but not literally.
Multiple perspectives have to be considered, as each new perspective will hold some sliver of truth. Seeing partial truth in multiple perspectives is fundamental to navigating the world and making informed life decisions.
One of the popular memes in literature, movies, and tech journalism is that man's creation will rise up and destroy its maker.
Lately, this has taken the form of a fear of AI becoming omnipotent, rising up and annihilating mankind.
The economy has jumped on the AI bandwagon; for a certain period, if you did not have "AI" in your investor pitch, you could forget about funding. (Tip: If you are just using a Google service to tag some images, you are not doing AI.)
However, is there actually anything deserving of the term AI? I would like to make the point that there isn't, and that our current thinking is too focused on the systems themselves without much thought for the humans using them, robbing us of the true benefits.
What companies currently employ in the wild are nearly exclusively statistical pattern recognition and replication engines. Basically, all those systems follow the "monkey see, monkey do" pattern: They get fed a certain amount of data and try to mimic some known (or fabricated) output as closely as possible.
When used to provide value, you give them some real-life input and read off the predicted output. What if they encounter things never seen before? Well, you had better hope that those "new" things are sufficiently similar to previous things, or your "intelligent" system will give quite stupid responses.
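A toy version of such a "monkey see, monkey do" system makes the point concrete. The sketch below (data and labels invented for illustration) is a one-nearest-neighbour classifier: it can only echo labels it has already seen, so for an input far outside its training data it still answers confidently, with no notion that it is guessing:

```python
# A minimal imitation-only learner: it memorizes (feature, label) pairs and
# echoes the label of whichever stored example is closest to the query.

def nearest_neighbor_label(training, query):
    """Return the label of the training point nearest to the query."""
    return min(training, key=lambda pair: abs(pair[0] - query))[1]

# Pairs the system has "seen"; the single numeric feature is arbitrary
training = [(1.0, "sheep"), (1.2, "sheep"), (8.0, "wolf"), (8.3, "wolf")]

print(nearest_neighbor_label(training, 1.1))    # near the training data: plausible
print(nearest_neighbor_label(training, 1000.0)) # far outside it: still answers,
                                                # just as confidently
```

Nothing in the system knows what "sheep" means; it only knows which remembered example an input resembles most, which is exactly the failure mode described above.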
But there is not the slightest shred of understanding, reasoning, or context in there, just simple re-creation of things seen before. An image recognition system trained to detect sheep in a picture does not have the slightest idea what "sheep" actually means. However, those systems have become so good at recreating the output that they sometimes look like they know what they are doing.
Isn't that good enough, you may ask? Well, for some limited cases, it is. But it is not "intelligent", as it lacks any ability to reason and needs informed users to identify less obvious outliers with possibly harmful downstream effects.
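To make the "no understanding" point concrete, here is a hypothetical sketch (the data, labels, and feature names are invented): a one-nearest-neighbour matcher that only replicates remembered examples. Near its training data it looks competent; on something it has never meaningfully seen, it still answers with total confidence, because it has no notion of "I don't know."

```python
# Toy "monkey see, monkey do" classifier: pure replication of past examples.
# Features are (fluffiness, stripedness), both invented for illustration.
training_data = [
    ((0.9, 0.1), "sheep"),   # fluffy, not striped
    ((0.8, 0.2), "sheep"),
    ((0.1, 0.9), "zebra"),   # striped, not fluffy
    ((0.2, 0.8), "zebra"),
]

def classify(features):
    """Return the label of the closest remembered example. No reasoning,
    no uncertainty estimate: just imitation of the training data."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training_data, key=lambda ex: dist(ex[0], features))[1]

print(classify((0.85, 0.15)))  # close to training data: looks clever
print(classify((0.5, 0.5)))    # neither sheep nor zebra, yet it still
                               # answers with full confidence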
The ladder of thinking has three rungs:
Imitation: You imitate what you have been shown. For this, you do not need any understanding, just correlations. You are able to remember and replicate the past. Lab mice or current AI systems are on this rung.
Intervention: You understand causal connections and can figure out what would happen if you did this or that now, based on what you have learned about the world. This requires a mental model of the part of the world you want to influence and of its most relevant downstream dependencies. You are able to imagine a different future. You meet dogs and small children on this rung, so it is not a bad place to be.
Counterfactual reasoning: The highest rung, where you wonder what would have happened, had you done this or that in the past. This requires a full world model and a way to simulate the world in your head. You are able to imagine multiple pasts and futures. You meet crows, dolphins and adult humans here.
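The gap between the first two rungs can be illustrated with a toy simulation (the numbers and variable names are entirely hypothetical): a hidden common cause makes two variables correlate, so a rung-one system that only observes data badly overestimates what would happen under an actual intervention.

```python
import random

random.seed(0)

def sample(do_x=None):
    """One draw from a toy causal world: hidden cause Z drives both X and Y.
    Passing do_x forces X, cutting its link to Z (an intervention)."""
    z = random.random() < 0.5                                  # hidden cause
    x = do_x if do_x is not None else (random.random() < (0.9 if z else 0.1))
    y = random.random() < (0.8 if z else 0.2)                  # Y depends only on Z
    return x, y

# Rung 1 (imitation): observe correlations. P(Y=1 | X=1) looks high,
# because seeing X=1 is evidence that Z=1.
obs = [sample() for _ in range(100_000)]
p_y_given_x = sum(y for x, y in obs if x) / sum(x for x, _ in obs)

# Rung 2 (intervention): force X=1. Now X carries no information about Z,
# and the true causal effect of X on Y shows up: none.
intv = [sample(do_x=True) for _ in range(100_000)]
p_y_do_x = sum(y for _, y in intv) / len(intv)

print(round(p_y_given_x, 2))  # ≈ 0.74: correlation, not causation
print(round(p_y_do_x, 2))     # ≈ 0.50: what intervening actually yields
```

A system that only fits observed data would happily recommend "set X to 1 to get Y," and be wrong; climbing a rung requires a causal model, not more data.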
In order to ascend from one rung to the next, you need to develop a completely new set of skills. You can't just make an imitation system larger and expect it to suddenly be able to reason. Yet this is what we are currently doing with our ever-increasing deep learning models: We think that by giving them more power to imitate, they will at some point magically develop the ability to think. Apart from self-delusional hope and selling nice stories to investors and newspapers, there is little reason to believe that.
And we haven't even touched the topic of computational complexity or the economic and ecological impact of ever-growing models. We might simply not be able to grow our models to the size needed, even if the method worked (which, so far, it doesn't).
Whatever those systems create is the mere semblance of intelligence, and in pursuing the goal of generating artificial intelligence by imitation, we are following a cargo cult.
Instead, we should get comfortable with the fact that the current ways will not achieve real AI, and we should stop calling it that. Machine learning (ML) is a perfectly fitting term for a tool with awesome capabilities in the narrow fields where it can be applied. And with any tool, you should not try to make the entire world your nail, but instead find out where to use it and where not.
Machines are strong when it comes to quickly and repeatedly performing a task with minimal uncertainty. They are the ruling class of the first rung.
Humans are strong when it comes to context, understanding and making sense with very little data at hand and high uncertainties. They are the ruling class of the second and third rung.
So what if we shifted our efforts away from the current obsession with removing the human element from everything and thought about combining both strengths? There is enormous potential in giving machine learning systems the optimal, human-centric shape, in finding the right human-machine interface, so that both can shine. The ML system prepares the data, handles some automatable tasks and then hands the results to the human, who deals with them according to context.
ML can become something like good staff to a CEO, a workhorse to a farmer or a good user interface to an app user: empowering, saving time, reducing mistakes.
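A minimal sketch of such a division of labor (the function names, threshold, and stand-in model are all invented for illustration): the ML system keeps the cases it is confident about and escalates the uncertain ones to a person.

```python
def triage(cases, model, threshold=0.9):
    """Route each case either to automation or to a human reviewer,
    based on the model's own confidence."""
    automated, for_human = [], []
    for case in cases:
        label, confidence = model(case)
        if confidence >= threshold:
            automated.append((case, label))   # machine handles it
        else:
            for_human.append(case)            # human decides, with context
    return automated, for_human

# Toy stand-in for a trained model: confident on small inputs only.
def toy_model(x):
    return ("even" if x % 2 == 0 else "odd", 0.95 if x < 10 else 0.6)

auto, manual = triage([2, 7, 15, 4], toy_model)
print(auto)    # [(2, 'even'), (7, 'odd'), (4, 'even')]
print(manual)  # [15]: low confidence, escalated to a human
```

The design choice worth noticing is that the interface, the confidence threshold and the hand-off, is where the real product work lives, not in the model itself.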
Building an ML system for a given task is rather easy and will become ever easier. But finding a robust, working integration of the data, and of its pre-processed results, with the decision-maker (i.e., the human) is a hard task. There is a reason why most ML projects fail at the stage of adoption and integration with the organization seeking to use them.
Solving this is a creative task: it is about domain understanding, product design and communication. Instead of going ever bigger to serve, say, more targeted ads, the true prize lies in connecting data and humans in clever ways to make better decisions and solve tougher, more important problems.
Republished with permission of the World Economic Forum. Read the original article.
Many small animals grow their teeth, claws and other "tools" out of materials that are filled with zinc, bromine and manganese, reaching up to 20% of the material's weight.
My colleagues and I call these "heavy element biomaterials," and in a new paper, we suggest that these materials make it possible for animals to grow scalpel-sharp and precisely shaped tools that are resistant to breaking, deformation and wear.
We examined ant mandible teeth and found that they are a smooth mix of proteins and zinc, with single zinc atoms attached to about a quarter of the amino acid units that make up the proteins forming the teeth. In contrast, calcified tools, like human teeth, are made of relatively large chunks of calcium minerals. We think the lack of chunkiness in heavy element biomaterials makes them better than calcified materials at forming smooth, precisely shaped and extremely sharp tools.
To evaluate the advantages of heavy element biomaterials, we estimated the force, energy and muscle size required for cutting with tools made of different materials. Compared with other hard materials grown by these animals, the wear-resistant zinc material enables heavily used tools to puncture stiff substances using only one-fifth of the force. The estimated advantage is even greater relative to calcified materials, which, since they can't be nearly as sharp as heavy element biomaterials, can require more than 100 times as much force.
Biomaterials that incorporate zinc (red) and manganese (orange) are located in the important cutting and piercing edges of ant mandibles, worm jaws and other 'tools.' (Robert Schofield, CC BY-ND)
Why it matters
It's not surprising that materials that could make sharp tools would evolve in small animals. A tick and a wolf both need to puncture the same elk skin, but the wolf has vastly stronger muscles. The tick can make up for its tiny muscles by using sharper tools that focus force onto smaller regions.
But, like a sharp pencil tip, sharper tool tips break more easily. The danger of fracture is made even worse by the tendency of small animals to extend their reach using long, thin tools like those pictured above. And a chipped claw or tooth may be fatal for a small animal that doesn't have the strength to cut with blunted tools.
From an evolutionary perspective, these materials allow smaller animals to consume tougher foods. And the energy saved by using less force during cutting can be important for any animal. These advantages may explain the widespread use of heavy element biomaterials in nature: most ants, many other insects, spiders and their relatives, marine worms, crustaceans and many other types of organisms use them.
What still isn't known
While my team's research has clarified the advantages of heavy element biomaterials, we still don't know exactly how zinc and manganese harden and protect the tools.
One possibility is that a small fraction of the zinc, for example, forms bridges between proteins, and these cross-links stiffen the material â like crossbeams stiffen a building. We also think that when a fang bangs into something hard, these zinc cross-links may break first, absorbing energy to keep the fang itself from chipping.
We speculate that the abundance of extra zinc is a ready supply for healing the material by quickly reestablishing the broken zinc-histidine cross-links between proteins.
What's next?
The potential that these materials are self-healing makes them even more interesting, and our team's next step is to test this hypothesis. Eventually we may find that self-healing or other features of heavy element biomaterials could lead to improved materials for things like small medical devices.
Mass psychogenic illness, also known as mass hysteria, is when a group of people manifest physical symptoms from imagined threats.
History is littered with outbreaks of mass hysteria.
Recently, alleged cases of Tourette's syndrome appeared all over the world. Was it real or mass psychogenic illness?
While the term is often avoided for fear of ridiculing something more serious, mass psychogenic illness (MPI), also known as mass sociogenic illness (MSI) or mass hysteria, is a real occurrence that can cause a variety of physical symptoms to manifest in groups of people despite the lack of any physical cause. Often compared to conversion disorder, in which emotional issues are "converted" into physical problems, MPI tends to occur among people who share anxieties, fears, and a sense of community. In the right group of people, it can spread like a virus.
A curious case of the condition related to TikTok videos shows both how imagined conditions can spread and how our modern media landscape presents new problems never even dreamt of in a time before the internet.
TikTok tics
In 2019, a strange slew of new Tourette's cases made its way into hospitals all over the world. Oddly, these were suddenly occurring in children well over the age of six, the age of typical onset. Most peculiar of all, many of the patients were exhibiting identical symptoms and tics. While many cases of Tourette's are similar, these symptoms were precisely the same.
As it turned out, the tics were also identical to those exhibited by one Jan Zimmermann, a 23-year-old YouTuber from Germany with Tourette's. On his channel, Gewitter im Kopf, he documents his daily life with the condition. All of the patients who suddenly claimed to have tics were fans of his or of similar channels on YouTube and TikTok.
There was nothing physically wrong with the large number of people who suddenly came down with Tourette's-like symptoms, and most of them recovered immediately after being told that they did not have Tourette's syndrome. Others recovered after brief psychological interventions. The spread of the condition across a social group despite the lack of a physical cause all pointed toward an MPI event.
Historical cases of mass hysteria
Of course, humans do not need social media to develop symptoms of a disease that they do not have. Several strange cases of what appears to have been mass hysteria exist throughout history. While some argue for a physical cause in each case, the consensus is that the ultimate cause was psychological.
The dancing plagues of the Middle Ages, in which hundreds of people began to dance until they were utterly exhausted despite apparently wishing to stop, are thought to have been examples of mass madness. Some cases also involved screaming, laughing, having violent reactions to the color red, and lewd behavior. Attempts to calm the groups by providing musicians just made the problem worse, as people joined in to dance to the music. By the time the dancing plague of 1518 ended, several people had died of exhaustion or injuries sustained during their dance marathon.
It was also common for nunneries to suffer outbreaks of what was then considered demonic possession but now appears to be MPI. In many well-recorded cases, young nuns, often cast into a life of poverty and severe discipline with little say in the matter, suddenly found themselves "possessed" and began behaving in extremely un-nunlike fashion. These instances often spread to other members of the convent and required intervention by exorcists to resolve.
A more recent example might be the curious story of the Mad Gasser of Mattoon. During WWII in the small town of Mattoon, Illinois, 33 people awoke in the middle of the night to a "sweet smell" in their homes followed by symptoms such as nausea, vomiting, and paralysis. Many claimed to see a figure outside their rooms fleeing the scene. Claims of gassings rapidly followed the initial cases, and the police department was swamped with reports that amounted to nothing. The cases ended after the sheriff threatened to arrest anyone submitting a report of being gassed without agreeing to a medical review.
Each of these cases exhibits the generally agreed upon conditions for MPI: the people involved were a cohesive group, they all agreed on the same threats existing, and they were enduring stressful and emotional conditions that later manifested as physical symptoms. Additionally, the symptoms appeared suddenly and spread by sight and communication among the affected individuals.
Social diseases for a social media age
One point upon which most sources on MPI agree is the tendency of the outbreaks to occur among cohesive groups whose members are in regular contact. This is easy to see in the above examples: nuns live together in small convents, medieval peasants did not travel much, and the residents of Mattoon were in a small community.
This makes the more recent case that relies on the internet all the more interesting. And it's not the only one. Another MPI event centered on a school in New York in 2011.
As a result, a team of German researchers has put forth the idea of a new version of MPI for the modern age: "mass social media-induced illness." It is similar to MPI but differs in that it is explicitly for cases driven by social media, in which people suffering from the same imagined symptoms never actually come into direct contact with one another.
Of course, these researchers are not the first to consider the problem in a digital context. Dr. Robert Bartholomew described the aforementioned New York case in a paper published in the Journal of the Royal Society of Medicine.
All this seems to imply that our online interactions can affect us in much the same ways as direct communication has for ages past and that the social groups we form online can be cohesive enough to cause identical symptoms in people who have never met. Therefore, we likely have not seen the last of "mass social media-induced illness."
By studying the characteristics of stars, like their temperature and luminosity, astrophysicists figured out how stars evolve over time.
This amazing insight is the primary lesson of the Hertzsprung-Russell (HR) diagram.
Human beings, as the species Homo sapiens, have been around for about 300,000 years. That turns out to be about 100 million nights during which somebody, somewhere looked up at the dark sky and asked, "What are those twinkly lights?"
Given all those nights and all those people asking pretty much the same question, it is pretty remarkable that we happen to live in one of the first generations that actually knows the answer. Here in the 21st century, we know for sure what stars are, and a key reason we have that knowledge is because of a little something called the HR diagram. Over the summer, I wrote two other posts on what I called the "most important graph in astrophysics." Today, I want to finish the series by explaining how the HR diagram shows us how stars age and evolve.
Stellar evolution: a star's life cycle
You can read the first and second posts here and here, respectively. But for completeness, let's restate that the HR diagram is a plot with stellar luminosity (L, for energy output) on the vertical axis and stellar surface temperature (T) on the horizontal axis. In the previous posts, we learned that when you measure L and T for a bunch of stars and then drop them onto this kind of plot, you find that the majority of the points fall on a thick diagonal band running from high stellar luminosity and temperature (high L and T) to low stellar luminosity and temperature (low L and T). That band is what astronomers call the Main Sequence, and its discovery in the HR diagram was key to understanding what stars were and how they shined.
What the Main Sequence revealed were stars in their long middle age. Middle-aged stars (meaning stars in between their relatively short birth and death phases) support themselves against their own crushing, titanic gravity by releasing energy through fusion reactions in their hot, dense cores. Hydrogen nuclei are fused into helium nuclei, giving up a little energy along the way through good ol' E = mc2.
As long as there is hydrogen to burn in the core, a star is stable, happy, and free to shine its brilliance into the dark night of space. Luckily stars have lots of hydrogen to burn. A star like the sun contains about a billion billion billion tons of hydrogen gas. That translates into about 10 billion years of life on the Main Sequence. But a billion billion billion tons of gas is not infinite. Eventually, the hydrogen fusion party must end. The star will run out of fuel in the core, and that is when it stops being middle-aged.
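The 10-billion-year figure can be sanity-checked with a back-of-envelope calculation. All inputs below are approximate, and the two fractions are standard rough assumptions, not numbers from this article: the sun fuses only about a tenth of its hydrogen while on the Main Sequence, and fusion converts about 0.7% of the fused mass into energy.

```python
M_SUN = 2.0e30        # kg, mass of the sun (mostly hydrogen)
L_SUN = 3.8e26        # W, the sun's energy output
C = 3.0e8             # m/s, speed of light

fuel_fraction = 0.10      # rough share of hydrogen burned in the core
mass_to_energy = 0.007    # E = mc^2 efficiency of hydrogen-to-helium fusion

energy = fuel_fraction * mass_to_energy * M_SUN * C**2   # joules available
lifetime_s = energy / L_SUN                              # burn time at L_SUN
lifetime_yr = lifetime_s / 3.15e7                        # seconds per year

print(round(lifetime_yr / 1e9, 1), "billion years")  # ≈ 10
```

Dividing the fuel supply by the rate of burning really does land on the ~10-billion-year Main Sequence lifetime quoted above.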
What happens next is also revealed by the HR diagram, which once again, is why it is the most important graph in astrophysics. When astronomers first started dropping their stars onto the diagram more than 100 years ago, they saw not only the Main Sequence but also stars clustered in other places. There were lots of moderately bright stars with low temperatures (high L and low T). There were also lots of really, really bright stars with even lower temperatures (very high L and lower T). Using the laws of physics associated with hot glowing matter, astronomers could derive the sizes of these bright cool stars and found that they were much bigger than the sun. They identified giant stars (the bright ones), which were 10 times the size of the sun, and supergiants (the really, really bright ones), which were 100 times the size of the sun.
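The size inference described above comes from the physics of hot glowing matter, specifically the Stefan-Boltzmann law, L = 4πR²σT⁴: at a fixed surface temperature, luminosity grows with the square of the radius. A rough numerical sketch (constants approximate, the 3,500 K temperature and size factors are illustrative):

```python
import math

SIGMA = 5.670e-8      # W m^-2 K^-4, Stefan-Boltzmann constant
L_SUN = 3.828e26      # W, solar luminosity
R_SUN = 6.957e8       # m, solar radius

def luminosity(radius_m, temp_k):
    """Stefan-Boltzmann law: total power radiated by a spherical blackbody."""
    return 4 * math.pi * radius_m**2 * SIGMA * temp_k**4

# Three stars at the same cool temperature (~3500 K) but different sizes:
# a very bright cool star on the HR diagram must therefore be huge.
for name, r_factor in [("sun-sized", 1), ("giant", 10), ("supergiant", 100)]:
    L = luminosity(r_factor * R_SUN, 3500)
    print(name, round(L / L_SUN, 1), "L_sun")
```

Same temperature, wildly different luminosity: that is exactly how a measured (L, T) point off the Main Sequence reveals a star 10 or 100 times the sun's size.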
These various kinds of giant stars on the HR diagram were the all-important evidence for the evolution of stars. Stellar properties were not static. They aged and changed just like we did. Astrophysicists eventually saw that the evolution of a star on the HR diagram was driven by the evolution of nuclear burning in its core. As researchers got better at modeling what happens within stars as they age, they came to see that after the hydrogen fuel runs out in the core, gravity begins to crush what is left: inert helium "ash."
Eventually, the gravitational squeeze drives temperatures and densities in the core high enough to ignite the helium ash, allowing the helium nuclei to fuse into carbon nuclei. These internal changes rearrange the outer layers of the star, making them swell and bloat, first into giants and then into supergiants. The details of why they get so large are complicated and require lots of detailed calculations (done with computers). What matters for us is that what comes out of those calculations are evolutionary tracks across the HR diagram. The tracks are predictions, telling astronomers how changes in a star's nuclear burning history will manifest in its luminosity and temperature, which, in turn, translates into how it will move across the HR diagram over time.
The changes for actual stars are too slow to watch over a human lifetime. But by taking measurements of lots of random stars (meaning they are at random points in their evolution), we can find the older ones in their giant or supergiant phases. Then, via some statistics, astronomers can see whether their theoretical evolutionary tracks match what appears in the HR diagram. The answer is a resounding yes.
So not only do we know what stars are (big balls of mostly hydrogen gas with a fusion furnace in the core), but we also know exactly how those luminous spheres evolve across billions of years of cosmic history â including lighting up the nights for a remarkable planet that is home to some remarkable hairless monkeys.
However, it's still relatively expensive to store energy. And since renewable energy generation isn't available all the time â it happens when the wind blows or the sun shines â storage is essential.
Here are three emerging technologies that could help make this happen.
Longer charges
From alkaline batteries for small electronics to lithium-ion batteries for cars and laptops, most people already use batteries in many aspects of their daily lives. But there is still lots of room for growth.
For example, high-capacity batteries with long discharge times â up to 10 hours â could be valuable for storing solar power at night or increasing the range of electric vehicles. Right now there are very few such batteries in use. However, according to recent projections, upwards of 100 gigawatts' worth of these batteries will likely be installed by 2050. For comparison, that's 50 times the generating capacity of Hoover Dam. This could have a major impact on the viability of renewable energy.
Batteries work by creating a chemical reaction that produces a flow of electrical current.
One of the biggest obstacles is limited supplies of lithium and cobalt, which currently are essential for making lightweight, powerful batteries. According to some estimates, around 10% of the world's lithium and nearly all of the world's cobalt reserves will be depleted by 2050.
Furthermore, nearly 70% of the world's cobalt is mined in the Congo, under conditions that have long been documented as inhumane.
Scientists are working to develop techniques for recycling lithium and cobalt batteries, and to design batteries based on other materials. Tesla plans to produce cobalt-free batteries within the next few years. Others aim to replace lithium with sodium, which has properties very similar to lithium's but is much more abundant.
Safer batteries
Another priority is to make batteries safer. One area for improvement is electrolytes â the medium, often liquid, that allows an electric charge to flow from the battery's anode, or negative terminal, to the cathode, or positive terminal.
When a battery is in use, charged particles in the electrolyte move around to balance out the charge of the electricity flowing out of the battery. Electrolytes often contain flammable materials. If they leak, the battery can overheat and catch fire or melt.
Scientists are developing solid electrolytes, which would make batteries more robust. It is much harder for particles to move around through solids than through liquids, but encouraging lab-scale results suggest that these batteries could be ready for use in electric vehicles in the coming years, with target dates for commercialization as early as 2026.
While solid-state batteries would be well suited for consumer electronics and electric vehicles, for large-scale energy storage, scientists are pursuing all-liquid designs called flow batteries.
A typical flow battery consists of two tanks of liquids that are pumped past a membrane held between two electrodes. (Qi and Koenig, 2017, CC BY)
In these devices both the electrolyte and the electrodes are liquids. This allows for super-fast charging and makes it easy to make really big batteries. Currently these systems are very expensive, but research continues to bring down the price.
Storing sunlight as heat
Other renewable energy storage solutions cost less than batteries in some cases. For example, concentrated solar power plants use mirrors to concentrate sunlight, which heats up hundreds or thousands of tons of salt until it melts. This molten salt then is used to drive an electric generator, much as coal or nuclear power is used to heat steam and drive a generator in traditional plants.
These heated materials can also be stored to produce electricity when it is cloudy, or even at night. This approach allows concentrated solar power to work around the clock.
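As a rough illustration of the scale involved (all numbers below are assumptions for the sketch, not figures from the article), the heat stored in a salt tank is just mass times specific heat times the temperature swing between the hot and cold tanks:

```python
mass_kg = 1.0e6     # 1,000 tonnes of salt; plants use hundreds to thousands
c_salt = 1500.0     # J/(kg*K), approximate specific heat of nitrate salt
delta_t = 250.0     # K, assumed swing between hot and cold storage tanks

heat_j = mass_kg * c_salt * delta_t          # thermal energy stored
electric_kwh = heat_j * 0.4 / 3.6e6          # assume ~40% heat-to-electricity

print(round(heat_j / 3.6e9, 1), "MWh of heat stored")      # ≈ 104
print(round(electric_kwh / 1000, 1), "MWh of electricity") # ≈ 42
```

Even this modest tank holds tens of megawatt-hours of dispatchable electricity, which is why thermal storage can be cheaper than batteries at grid scale.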
This idea could be adapted for use with nonsolar power generation technologies. For example, electricity made with wind power could be used to heat salt for use later when it isn't windy.
Concentrating solar power is still relatively expensive. To compete with other forms of energy generation and storage, it needs to become more efficient. One way to achieve this is to increase the temperature the salt is heated to, enabling more efficient electricity production. Unfortunately, the salts currently in use aren't stable at high temperatures. Researchers are working to develop new salts or other materials that can withstand temperatures as high as 1,300 degrees Fahrenheit (705 degrees Celsius).
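One way to see why temperature matters so much: no heat engine can beat the Carnot limit, η = 1 − T_cold/T_hot. A hedged sketch (the ~565 °C figure for current molten-salt plants is an assumption for illustration; 705 °C is the target mentioned above):

```python
def carnot(t_hot_k, t_cold_k=300.0):
    """Carnot efficiency limit for a heat engine, temperatures in kelvin."""
    return 1 - t_cold_k / t_hot_k

today = carnot(565 + 273)    # assumed current molten-salt temperature
target = carnot(705 + 273)   # the 705 C (1,300 F) target

print(round(today, 2))   # ≈ 0.64
print(round(target, 2))  # ≈ 0.69
```

A few percentage points on the theoretical ceiling translate into meaningfully more electricity per ton of heated salt, which is what drives the push toward hotter, more stable storage materials.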
One leading idea for how to reach higher temperature involves heating up sand instead of salt, which can withstand the higher temperature. The sand would then be moved with conveyor belts from the heating point to storage. The Department of Energy recently announced funding for a pilot concentrated solar power plant based on this concept.
Advanced renewable fuels
Batteries are useful for short-term energy storage, and concentrated solar power plants could help stabilize the electric grid. However, utilities also need to store a lot of energy for indefinite amounts of time. This is a role for renewable fuels like hydrogen and ammonia. Utilities would store energy in these fuels by producing them with surplus power, when wind turbines and solar panels are generating more electricity than the utilities' customers need.
Today these fuels are mostly made from natural gas or other nonrenewable fossil fuels via extremely inefficient reactions. While we think of it as a green fuel, most hydrogen gas today is made from natural gas.
Scientists are looking for ways to produce hydrogen and other fuels using renewable electricity. For example, it is possible to make hydrogen fuel by splitting water molecules using electricity. The key challenge is optimizing the process to make it efficient and economical. The potential payoff is enormous: inexhaustible, completely renewable energy.
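There is also a hard physical floor on that optimization. The minimum electricity needed to split water is set by the Gibbs free energy of the reaction, roughly 237 kJ per mole of hydrogen produced (values approximate; real electrolyzers need considerably more than this ideal minimum):

```python
DELTA_G = 237.0e3         # J per mol of H2, minimum electrical work to split water
MOLAR_MASS_H2 = 2.016e-3  # kg per mol of H2

j_per_kg = DELTA_G / MOLAR_MASS_H2   # joules of electricity per kg of hydrogen
kwh_per_kg = j_per_kg / 3.6e6        # convert joules to kilowatt-hours

print(round(kwh_per_kg, 1), "kWh per kg of H2, theoretical minimum")  # ≈ 33
```

Closing the gap between practical systems and this ~33 kWh/kg floor is precisely the efficiency challenge the paragraph above describes.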