May 23, 2023
 

In the last several years, we worked out most of the details about the Solar Gravitational Lens. How it forms images. How its optical qualities are affected by the inherent spherical aberration of a gravitational lens. How the images are further blurred by deviations of the lens from perfect spherical symmetry. How the solar corona contributes huge amounts of noise and how it can be controlled when the image is reconstructed. How the observing spacecraft would need to be navigated in order to maintain precise positions within the image projected by the SGL.

But one problem remained unaddressed: The target itself. Specifically, the fact that the target planet that we might be observing is not standing still. If it is like the Earth, it spins around its axis once every so many hours. And as it orbits its host star, its illumination changes as a result.

In other words, this is not what we are up against, much as we’d prefer the exoplanet to play nice and remain motionless and fully illuminated at all times.

Rather, what we are up against is this:

Imaging such a moving target is hard. Integration times must be short in order to avoid motion blur. And image reconstruction must take into account how specific surface features are mapped onto the image plane. An image plane that, as we recall, we sample one “pixel” at a time, as the projected image of the exoplanet is several kilometers wide. It is traversed by the observing spacecraft that, looking back at the Sun, measures the brightness of the Einstein ring surrounding the Sun, and reconstructs the image from this information.

This is a hard problem. I think it is doable, but this may be the toughest challenge yet.

Oh, and did I mention that (not shown in the simulation) the exoplanet may also have varying cloud cover? Not to mention that, unlike this visual simulation, a real exoplanet may not be a Lambertian reflector, but rather, different parts (oceans vs. continents, mountain ranges vs. plains, deserts vs. forests) may have very different optical properties, varying values of specularity or even more complex optical behavior?

 Posted by at 12:06 am
May 23, 2022
 

This morning, Google greeted me with a link in its news stream to a Hackaday article on the Solar Gravitational Lens. The link caught my attention right away, as I recognized some of my own simulated, SGL-projected images of an exo-Earth and its reconstruction.

Reading the article I realized that it appeared in response to a brand new video by SciShow, a science-oriented YouTube channel.

Yay! I like nicely done videos presenting our work and this one is fairly good. There are a few minor inaccuracies, but nothing big enough to be even worth mentioning. And it’s very well presented.

I suppose I should offer my thanks to SciShow for choosing to feature our research with such a well-produced effort.

 Posted by at 7:22 pm
May 6, 2022
 

A beautiful study was published the other day, and it received a lot of press coverage, so I get a lot of questions.

This study shows how, in principle, we could reconstruct the image of an exoplanet with the Solar Gravitational Lens (SGL) from just a single snapshot of the Einstein ring around the Sun.

The problem is, we cannot. As they say, the devil is in the details.

Here is a general statement about any conventional optical system that does not involve more exotic, nonlinear optics: whatever the system does, ultimately it maps light from picture elements, pixels, in the source plane, into pixels in the image plane.

Let me explain what this means in principle, through an extreme example. Suppose someone tells you that there is a distant planet in another galaxy, and you are allowed to ignore any contaminating sources of light. You are allowed to forget about the particle nature of light. You are allowed to forget the physical limitations of your cell phone’s camera, such as its CMOS sensor dynamic range or readout noise. You hold up your cell phone and take a snapshot. It doesn’t even matter if the camera is not well focused or if there is motion blur, so long as you have precise knowledge of how it is focused and how it moves. The map is still a linear map. So if your cellphone camera has 40 megapixels, a simple mathematical operation, inverting the so-called convolution matrix, lets you reconstruct the source in all its exquisite detail. All you need to know is a precise mathematical description, the so-called “point spread function” (PSF) of the camera (including any defocusing and motion blur). Beyond that, it just amounts to inverting a matrix, or equivalently, solving a linear system of equations. In other words, standard fare for anyone studying numerical computational methods, and easily solvable even at extremely high resolutions using appropriate computational resources. (A high-end GPU in your desktop computer is ideal for such calculations.)
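To make the idea concrete, here is a minimal one-dimensional sketch (all sizes and pixel values hypothetical): a known blur written as a convolution matrix which, in the complete absence of noise, can be inverted exactly to recover the source.

```python
import numpy as np

# A hypothetical one-dimensional "source" of 8 pixels.
source = np.array([0., 1., 4., 9., 4., 1., 0., 2.])

# Known point spread function (PSF): a simple 3-tap blur.
psf = np.array([0.25, 0.5, 0.25])

# Build the convolution matrix A so that image = A @ source.
n = len(source)
A = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        k = i - j + 1                  # offset into the 3-tap PSF
        if 0 <= k < len(psf):
            A[i, j] = psf[k]

image = A @ source                     # what the "camera" records

# With zero noise, inverting the linear map recovers the source exactly
# (up to floating point precision).
recovered = np.linalg.solve(A, image)
print(np.allclose(recovered, source))  # True
```

The same logic applies in two dimensions and at 40 megapixels; only the size of the linear system changes.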

Why can’t we do this in practice? Why do we worry about things like the diffraction limit of our camera or telescope?

The answer, ultimately, is noise. The random, unpredictable, or unmodelable element.

Noise comes from many sources. It can include so-called quantization noise, because our camera sensor digitizes the light intensity using a finite number of bits. It can include systematic noise arising for many reasons, such as unevenly calibrated sensor pixels or even approximations used in the mathematical description of the PSF. It can include unavoidable, random, “stochastic” noise that arises because light arrives as discrete packets of energy in the form of photons, not as a continuous wave.

When we invert the convolution matrix in the presence of all these noise sources, the noise gets amplified far more than the signal. In the end, the reconstructed, “deconvolved” image becomes useless unless we have an exceptionally high signal-to-noise ratio, or SNR, to begin with.
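A toy example (hypothetical sizes and noise levels, extending the one-dimensional setup above) makes the point: blur a signal with a wide, nearly singular convolution matrix, perturb the recorded image by one part in a million, and naive deconvolution amplifies that perturbation by orders of magnitude.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 64

# A wide, Gaussian-like PSF makes the convolution matrix nearly
# singular, i.e. badly conditioned.
x = np.arange(n)
A = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 2.0) ** 2)
A /= A.sum(axis=1, keepdims=True)      # normalize each row

source = rng.random(n)
image = A @ source

# Perturb the recorded image by one part in a million.
noisy = image + 1e-6 * rng.standard_normal(n)

# Naive deconvolution amplifies that tiny perturbation enormously.
recovered = np.linalg.solve(A, noisy)
err = np.linalg.norm(recovered - source) / np.linalg.norm(source)
print(f"condition number ~ {np.linalg.cond(A):.1e}")
print(f"relative reconstruction error ~ {err:.1e}")
```

The condition number of the blur matrix tells you, roughly, by how much the worst-case noise component gets magnified during inversion.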

The authors of this beautiful study knew this. They even state it in their paper. They mention values such as 4,000, even 200,000 for the SNR.

And then there is reality. The Einstein ring does not appear in black, empty space. It appears on top of the bright solar corona. And even if we subtract the corona, we cannot eliminate the stochastic shot noise due to photons from the corona by any means other than collecting data for a longer time.

Let me show a plot from a paper that is work-in-progress, with the actual SNR that we can expect on pixels in a cross-sectional view of the Einstein ring that appears around the Sun:

Just look at the vertical axis. See those values there? That’s our realistic SNR, when the Einstein ring is imaged through the solar corona, using a 1-meter telescope with a 10-meter focal length and an image sensor pixel size of a square micron. These choices are consistent with just a tad under 5000 pixels falling within the usable area of the Einstein ring, which can be used to reconstruct, in principle, a roughly 64 by 64 pixel image of the source. As this plot shows, a typical value for the SNR would be 0.01 using 1 second of light collecting time (integration time).

What does that mean? Well, for starters it means that even if everything else is absolutely, flawlessly perfect (no motion blur, indeed no motion at all, no sources of contamination other than the solar corona, no quantization noise, no limitations on the sensor), collecting enough light to achieve an SNR of 4,000 would require roughly 160 billion seconds of integration time. That is roughly 5,000 years.
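Since shot-noise-limited SNR grows with the square root of the integration time, the arithmetic behind those numbers is a one-liner, using the round values quoted above:

```python
# Shot-noise-limited SNR grows as the square root of integration time:
# SNR(t) = SNR(1 s) * sqrt(t / 1 s).

snr_1s = 0.01          # SNR for 1 second of integration (from the plot)
snr_target = 4000.0    # the SNR assumed in the single-snapshot study

t_seconds = (snr_target / snr_1s) ** 2
t_years = t_seconds / (365.25 * 24 * 3600)

print(f"{t_seconds:.3g} s = {t_years:.0f} years")  # 1.6e+11 s, ~5070 years
```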

And that is why we are not seriously contemplating image reconstruction from a single snapshot of the Einstein ring.

 Posted by at 4:01 pm
Mar 24, 2022
 

Between a war launched by a mad dictator, an occupation by “freedom convoy” mad truckers, and other mad shenanigans, it’s been a while since I last blogged about pure physics.

Especially about a topic close to my heart, modified gravity. John Moffat’s modified gravity theory MOG, in particular.

Back in 2020, a paper was published arguing that MOG may not be able to account for the dynamics of certain galaxies. The author studied a large, low surface brightness galaxy, Antlia II, which has very little mass, and concluded that the only way to fit MOG to this galaxy’s dynamics is by assuming outlandish values not only for the MOG theory’s parameters but also for the parameter that characterizes the mass distribution in the galaxy itself.

In fact, I would argue that any galaxy this light that does not follow Newtonian physics is bad news for modified theories of gravity; these theories predict deviations from Newtonian physics for large, heavy galaxies, but a galaxy this light is comparable in size to large globular clusters (which definitely behave the Newtonian way) so why would they be subject to different rules?

But then… For many years now, John and I (maybe I should only speak for myself in my blog, but I think John would concur) have been cautiously, tentatively raising the possibility that these faint satellite galaxies are really not very good test subjects at all. They do not look like relaxed, “virialized” mechanical systems; rather, they appear to be tidally disrupted by the host galaxy whose vicinity they inhabit.

We have heard arguments that this cannot be the case, that these satellites show no signs of recent interaction. And in any case, it is never a good idea for a theorist to question the data. We are not entitled to “alternative facts”.

But then, here’s a paper from just a few months ago with a very respectable list of authors on its front page, presenting new observations of two faint galaxies, one being Antlia II: “Our main result is a clear detection of a velocity gradient in Ant2 that strongly suggests it has recently experienced substantial tidal disruption.”

I find this result very encouraging. It is consistent with the basic behavior of the MOG theory: Systems that are too light to show effects due to modified gravity exhibit strictly Newtonian behavior. This distinguishes MOG from the popular MOND paradigm, which needs the somewhat ad hoc “external field effect” to account for the dynamics of diffuse objects that show no presence of dark matter or modified gravity.

 Posted by at 2:30 am
Feb 22, 2022
 

This is the last moment until well into the 22nd century that the current time and date in UTC can be expressed using only two digits, 2 and 0.
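For the curious, the claim is easy to verify in a few lines of Python: every digit of the timestamp 2022-02-22 22:22:22 UTC is a 2 or a 0, and the next calendar date built from those two digits alone is February 2, 2200 (the timestamp format below is just one way to spell it out).

```python
from datetime import datetime, timezone

def only_twos_and_zeros(t: datetime) -> bool:
    """True if every digit of the timestamp is a 2 or a 0."""
    return set(t.strftime("%Y%m%d%H%M%S")) <= set("20")

# The moment in question, to the second:
print(only_twos_and_zeros(datetime(2022, 2, 22, 22, 22, 22, tzinfo=timezone.utc)))  # True

# The next date built from 2s and 0s alone: February 2, 2200.
print(only_twos_and_zeros(datetime(2200, 2, 2, 0, 0, 0, tzinfo=timezone.utc)))      # True

# One year later, a 3 sneaks in:
print(only_twos_and_zeros(datetime(2023, 2, 22, tzinfo=timezone.utc)))              # False
```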

I can only hope that this date will not be memorable for another reason, you know, something like the start of WW3?

 Posted by at 5:22 pm
Feb 19, 2022
 

The 64-antenna radio telescope complex, MeerKAT, is South Africa’s contribution to the Square Kilometre Array (SKA), an international project under development to create an unprecedented radio astronomy facility.

While the SKA project is still in its infancy, MeerKAT is fully functional, and it just delivered the most detailed, most astonishing images yet of the central region of our own Milky Way. Here is, for instance, an image of the Sagittarius A region that also hosts the Milky Way’s supermassive black hole, Sgr A*:

The filamentary structure that is seen in this image is apparently poorly understood. As for the scale of this image, notice that it is marked in arc seconds; at the estimated distance to Sgr A, one arc second translates into roughly 1/8th of a light year, so the image presented here is roughly a 15 by 15 light year area.
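That conversion is just small-angle trigonometry; here is a quick sketch, assuming a distance to the galactic center of roughly 26,700 light years (about 8.2 kpc):

```python
import math

# Assumed distance to the galactic center: roughly 8.2 kpc,
# i.e. about 26,700 light years.
distance_ly = 26_700

# One arc second in radians:
arcsec = math.radians(1 / 3600)

# Transverse size subtended by one arc second at that distance:
size_ly = distance_ly * arcsec
print(f"1 arcsec ~ {size_ly:.3f} ly")   # ~0.13 ly, roughly 1/8 light year
```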

 Posted by at 5:04 pm
Mar 14, 2021
 

The next in our series of papers describing the extended gravitational lens (extended, that is, in that we are no longer treating the lensing object as a gravitational monopole) is now out, on arXiv.

Here’s one of my favorite images from the paper, which superimposes the boundary of the quadrupole caustic (an astroid curve) onto a 3D plot showing the amplitude of the gravitational lens’s point-spread function.

I was having lots of fun working on this paper. It was, needless to say, a lot of work.

 Posted by at 9:18 pm
Dec 15, 2020
 

A very nice article about our work on the Solar Gravitational Lens was published a few days ago on Universe Today, on account of our recent preprint, which shows quantitative results assessing the impact of image reconstruction on signal and noise.

Because the SGL is such an imperfect lens, the noise penalty is substantial. However, as it turns out, it is much reduced when the projected image area is large, such as when an exoplanet in a nearby star system is targeted.

While this is good news, the Sun’s gravitational field has other imperfections. We are currently working on modeling these and assessing their impact on noise. Next comes the problem of imaging a moving target: an exoplanet that spins, which is illuminated from varying directions, and which may have varying surface features (clouds, vegetation, etc.) Accounting for all these effects is essential if we wish to translate basic theory into sound scientific and engineering requirements.

So, the fun continues. For now, it was nice to see this piece in Universe Today.

 Posted by at 11:08 pm
Dec 2, 2020
 

According to the immortal Douglas Adams, God’s final message to His creation is simple: “We apologize for the inconvenience.”

But there’s also another final message of sorts, the answer to the Ultimate Question about Life, Universe, and Everything: 42.

Recently, a researcher by the name of Michael Hippke analyzed the seemingly random bits that are contained in minute fluctuations of the Cosmic Microwave Background (CMB) radiation. His conclusion: there is no discernible pattern, no appearance of constants of nature, no detectable statistical autocorrelation. The message is random.

I beg to respectfully disagree. In the 512-bit segment published by Hippke, the bit sequence 101010 appears no fewer than eight, er, nine times (one occurrence split between two lines).
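Hippke’s actual 512-bit sequence is not reproduced here, but the counting itself is a few lines of Python, provided overlapping matches are allowed (which is also how the occurrence split between two lines gets caught). And to be fair, in 512 random bits a given 6-bit pattern such as 101010 (binary for 42) is expected to appear about (512 − 5)/2⁶ ≈ 8 times anyway; a random stand-in string illustrates the method.

```python
import random

def count_overlapping(bits: str, pattern: str = "101010") -> int:
    """Count occurrences of pattern in bits, allowing overlaps."""
    count = 0
    start = 0
    while (start := bits.find(pattern, start)) != -1:
        count += 1
        start += 1              # step one bit so overlapping hits count
    return count

# Sanity check: "101010101010" contains the pattern at offsets 0, 2, 4, 6.
print(count_overlapping("101010101010"))   # 4

# A random 512-bit stand-in for the published sequence:
random.seed(1)
bits = "".join(random.choice("01") for _ in range(512))
print(count_overlapping(bits))
```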

Now if we only knew the question to which the answer is 42…

 Posted by at 1:18 pm
Dec 1, 2020
 

The giant Arecibo radio telescope is no more.

Damaged by a broken cable just a few weeks ago, the telescope completely collapsed today.

Incredibly sad news.

Completed in 1963, the telescope was 57 years old, just like me. I hope I will last a few more years, though.

 Posted by at 9:52 pm
Sep 3, 2020
 

Tonight, Slava Turyshev sent me a link to an article that was actually published three months ago on medium.com but had, until now, escaped our attention.

It is a very nice summary of the work that we have been doing on the Solar Gravitational Lens to date.

It really captures the essence of our work and the challenges that we have been looking at.

And there is so much more to do! Countless more things to tackle: image reconstruction of a moving target, imperfections of the solar gravitational field, precision of navigation… not to mention the simple, basic challenge of attempting a deep space mission to a distance four times greater than anything to date, lasting several decades.

Yes, it can be done. No it’s not easy. But it’s a worthy challenge.

 Posted by at 10:54 pm
Jul 31, 2020
 

A few weeks ago, Christian Ready published a beautiful video on his YouTube channel, Launch Pad Astronomy. In this episode, he described in detail how the Solar Gravitational Lens (SGL) works, and also our efforts so far.

I like this video very much. Especially the part that begins at 10:28, where Christian describes how the SGL can be used for image acquisition. The entire video is well worth seeing, but this segment in particular does a better job than we were ever able to do with words alone, explaining how the Sun projects an image of a distant planet onto a square-kilometer-sized area, and how this image is scanned, one imaginary pixel at a time, by measuring the brightness of the Einstein ring around the Sun as seen from each pixel location.

We now understand this process well, but many more challenges remain. These include, in no particular order, deviations of the Sun from spherical symmetry, minor variations in the brightness of the solar corona, the relative motion of the observing probe, Sun, exosolar system and target planet therein, changing illumination of the target, rotation of the target, changing surface features (weather, perhaps vegetation) of the target, and the devil knows what else.

Even so, lately I have become reasonably confident, based on my own simulation work and our signal-to-noise estimates, as well as a deconvolution approach under development that takes some of the aforementioned issues into consideration, that a high-resolution image of a distant planet is, in fact, obtainable using the SGL.

A lot more work remains. The fun only just began. But I am immensely proud to be able to contribute to this effort.

 Posted by at 7:41 pm
Feb 17, 2020
 

Our most comprehensive paper yet on the Solar Gravitational Lens is now online.

This was a difficult paper to write, but I think that, in the end, it was well worth the effort.

We are still investigating the spherical Sun (the gravitational field of the real Sun deviates ever so slightly from spherical symmetry, and that can, or rather it will, have measurable effects) and we are still considering a stationary target (as opposed to a planet with changing illumination and surface features) but in this paper, we now cover the entire image formation process, including models of what a telescope sees in the SGL’s focal region, how such observations can be stitched together to form an image, and how that image compares against the inevitable noise due to the low photon count and the bright solar corona.

 Posted by at 11:37 pm
Aug 7, 2019
 

Yesterday, we posted our latest paper on arXiv. Again, it is a paper about the solar gravitational lens.

This time around, our focus was on imaging an extended object, which of course can be trivially modeled as a multitude of point sources.

However, it is a multitude of point sources at a finite distance from the Sun.

This adds a twist. Previously, we modeled light from sources located at infinity: Incident light was in the form of plane waves.

But when the point source is at a finite distance, light from it comes in the form of spherical waves.

Now it is true that at a very large distance from the source, considering only a narrow beam of light, we can approximate those spherical waves as plane waves (paraxial approximation). But it still leaves us with the altered geometry.

But this is where a second observation becomes significant: As we can intuit, and as it is made evident through the use of the eikonal approximation, most of the time we can restrict our focus onto a single ray of light. A ray that, when deflected by the Sun, defines a plane. And the investigation can proceed in this plane.

The image above depicts two such planes, corresponding to the red and the green ray of light.

These rays do meet, however, at the axis of symmetry of the problem, which we call the optical axis. In the vicinity of this axis the symmetry of the problem is recovered, and the result no longer depends on the azimuthal angle that defines the plane in question.

To make a long story short, this allows us to reuse our previous results, by introducing the additional angle β, which determines, among other things, the additional distance (compared to parallel rays of light coming from infinity) that these light rays travel before meeting at the optical axis.

This is what our latest paper describes, in full detail.

 Posted by at 9:10 pm
May 31, 2019
 

Here is a thought that has been bothering me for some time.

We live in a universe that is subject to accelerating expansion. Galaxies that are not bound gravitationally to our Local Group will ultimately vanish from sight, accelerating away until the combination of distance and increasing redshift will make their light undetectable by any imaginable instrument.

Similarly, accelerating expansion means that there will be a time in the very distant future when the cosmic microwave background radiation itself will become completely undetectable by any conceivable technological means.

In this very distant future, the Local Group of galaxies will have merged already into a giant elliptical galaxy. Much of this future galaxy will be dark, as most stars would have run out of fuel already.

But there will still be light. Stars would still occasionally form. Some dwarf stars will continue to shine for trillions of years, using their available fuel at a very slow rate.

Which means that civilizations might still emerge, even in this unimaginably distant future.

And when they do, what will they see?

They will see themselves as living in an “island universe” in an otherwise empty, static cosmos. In short, precisely the kind of cosmos envisioned by many astronomers in the early 1920s, when it was still popular to think of the Milky Way as just such an island universe, not yet recognizing that many of the “spiral nebulae” seen through telescopes are in fact distant galaxies just as large, if not larger, than the Milky Way.

But these future civilizations will see no such nebulae. There will be no galaxies beyond their “island universe”. No microwave background either. In fact, no sign whatsoever that their universe is evolving, changing with time.

So what would a scientifically advanced future civilization conclude? Surely they would still discover general relativity. But would they believe its predictions of an expanding cosmos, despite the complete lack of evidence? Or would they see that prediction as a failure of the theory, which must be remedied?

In short, how would they ever come into possession of the knowledge that their universe was once young, dense, and full of galaxies, not to mention background radiation?

My guess is that they won’t. They will have no observational evidence, and their theories will reflect what they actually do see (a static, unchanging island universe floating in infinite, empty space).

Which raises the rather unnerving, unpleasant question: To what extent do there already exist features in our universe that are similarly unknowable, as they can no longer be detected by any conceivable instrumentation? Is it, in fact, possible to fully understand the physics of the universe, or are we already doomed to never being able to develop a full picture?

I find this question surprisingly unnerving and depressing.

 Posted by at 1:37 am
Jun 27, 2018
 

A while back, I wrote about the uncanny resemblance between the interstellar asteroid ‘Oumuamua and the fictitious doomsday weapon Iilah in A. E. van Vogt’s 1948 short story Dormant.

And now I am reading that Iilah’s, I mean, ‘Oumuamua’s trajectory changed due to non-gravitational forces. The suspect is comet-like outgassing, but observations revealed no gas clouds, so it is a bit of a mystery.

Even if this is purely a natural phenomenon (and I firmly believe that it is, just in case it needs to be said) it is nonetheless mind-blowingly fascinating.

 Posted by at 11:59 pm
May 29, 2018
 

There is an excellent diagram accompanying an answer on StackExchange, and I’ve been meaning to copy it here, because I keep losing the address.

The diagram summarizes many measures of cosmic expansion in a nice, compact, but not necessarily easy-to-understand form:

So let me explain how to read this diagram. First of all, time is going from bottom to top. The thick horizontal black line represents the moment of now. Imagine this line moving upwards as time progresses.

The thick vertical black line is here. So the intersection of the two thick black lines in the middle is the here-and-now.

Distances are measured in terms of the comoving distance, which is basically telling you how far a distant object would be now, if you had a long measuring tape to measure its present-day location.

The area shaded red (marked “past light cone”) is all the events that happened in the universe that we could see, up to the moment of now. The boundary of this area is everything in this universe from which light is reaching us right now.

So just for fun, let us pick an object at a comoving distance of 30 gigalightyears (Gly). Look at the dotted vertical line corresponding to 30 Gly, halfway between the 20 and 40 marks (either side, doesn’t matter.) It intersects the boundary of the past light cone when the universe was roughly 700 million years old. Good, there were already young galaxies back then. If we were observing such a galaxy today, we’d be seeing it as it appeared when the universe was 700 million years old. Its light would have spent 13.1 billion years traveling before reaching our instruments.

Again look at the dotted vertical line at 30 Gly and extend it all the way to the “now” line. What does this tell you about this object? You can read the object’s redshift (z) off the diagram: its light is shifted down in frequency by a factor of about 9.

You can also read the object’s recession velocity, which is just a little over two times the vacuum speed of light. Yes… faster than light. This recession velocity is based on the rate of change of the scale factor, essentially the Hubble parameter times the comoving distance. The Doppler velocity that one would deduce from the object’s redshift yields a value less than the vacuum speed of light. (Curved spacetime is tricky; distances and speeds can be defined in various ways.)
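That “little over two times” figure can be checked with the rate-of-change definition directly, assuming a present-day Hubble parameter of about 70 km/s/Mpc:

```python
# Recession velocity at a comoving distance of 30 Gly, assuming a
# present-day Hubble parameter of about 70 km/s/Mpc.

H0 = 70.0                      # km/s/Mpc (assumed round value)
c = 299_792.458                # km/s, vacuum speed of light

ly_per_mpc = 3.2616e6          # light years in one megaparsec
distance_mpc = 30e9 / ly_per_mpc

v = H0 * distance_mpc          # recession velocity, km/s
print(f"v ~ {v / c:.2f} c")    # a little over 2 c
```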

Another thing about this diagram is that in addition to the past, it also sketches the future, taking into account the apparent accelerating expansion of the universe. Notice the light red shaded area marked “event horizon”. This area contains everything that we will be able to see at our present location, throughout the entire history of the universe, all the way to the infinite future. Things (events) outside this area will never be seen by us, will never influence us.

Note how the dotted line at 30 Gly intersects this boundary when the universe is about 5 billion years old. Yes, this means that we will only ever see slightly less than the first 5 billion years of existence of a galaxy at a comoving distance of 30 Gly. Over time, light from this galaxy will be redshifted ever more, until it eventually appears to “freeze” and disappears from sight, never appearing to become older than 5 billion years.

Notice also how the dashed curves marking constant values of redshift bend inward, closer and closer to the “here” location as we approach the infinite future. This is a direct result of accelerating expansion: Things nearer and nearer to us will be caught up in the expansion, accelerating away from our location. Eventually this will stop, of course; cosmic acceleration will not rip apart structures that are gravitationally bound. But we will end up living in a true “island universe” in which nothing is seen at all beyond the largest gravitationally bound structure, the local group of galaxies. Fortunately that won’t happen anytime soon; we have many tens of billions of years until then.

Lastly, the particle horizon (blue lines) essentially marks the size of the visible part of the universe at any given time. Notice how the width of the interval marked by the intersection of the now line and the blue lines is identical to the width of the past light cone at the bottom of this diagram. Notice also how the blue lines correspond to infinite redshift.

As I said, this diagram is not an easy read but it is well worth studying.

 Posted by at 8:35 pm
Mar 10, 2018
 

There is a very interesting concept in the works at NASA, to which I had a chance to contribute a bit: the Solar Gravitational Telescope.

The idea, explained in this brand new NASA video, is to use the bending of light by the Sun to form an image of distant objects.

The resolving power of such a telescope would be phenomenal. In principle, it is possible to use it to form a megapixel-resolution image of an exoplanet as far as 100 light years from the Earth.

The technical difficulties are, however, challenging. For starters, a probe would need to be placed at least 550 astronomical units (about four times the distance to Voyager 1) from the Sun, precisely located to be on the opposite side of the Sun relative to the exoplanet. The probe would then have to mimic the combined motion of our Sun (dragged about by the gravitational pull of planets in the solar system) and the exoplanet (orbiting its own sun). Light from the Sun will need to be carefully blocked to ensure that we capture light from the exoplanet with as little noise as possible. And each time the probe takes a picture of the ring of light (the Einstein ring) around the Sun, it will be the combined light of many adjacent pixels on the exoplanet. The probe will have to traverse a region that is roughly a kilometer across, taking pictures one pixel at a time, which will then need to be deconvolved. The fact that the exoplanet itself is not constant in appearance (it will go through phases of illumination, it may have changing cloud cover, perhaps even changes in vegetation) further complicates matters. Still… it can be done, and it can be accomplished using technology we already have.

By its very nature, it would be a very long duration mission. If such a probe was launched today, it would take 25-30 years for it to reach the place where light rays passing on both sides of the Sun first meet and thus the focal line begins. It will probably take another few years to collect enough data for successful deconvolution and image reconstruction. Where will I be 30-35 years from now? An old man (or a dead man). And of course no probe will be launched today; even under optimal circumstances, I’d say we’re at least a decade away from launch. In other words, I have no chance of seeing that high-resolution exoplanet image unless I live to see (at least) my 100th birthday.
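To put that cruise in perspective, a quick back-of-the-envelope estimate (taking the midpoint of the 25-30 year figure above as an assumption):

```python
# Rough average speed needed to reach the start of the SGL focal line.

au_km = 1.495978707e8      # kilometers per astronomical unit
distance_au = 550          # where the focal line begins
years = 27.5               # assumed mid-range cruise time (25-30 years)

seconds = years * 365.25 * 24 * 3600
v = distance_au * au_km / seconds       # average speed, km/s
print(f"average speed ~ {v:.0f} km/s")  # roughly 95 km/s

# For comparison, Voyager 1 recedes from the Sun at about 17 km/s.
```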

Still, it is fun to dream, and fun to participate in such things. Though now I better pay attention to other things as well, including things that, well, help my bank account, because this sure as heck doesn’t.

 Posted by at 12:59 pm
Dec 15, 2017
 

The Internet (or at least, certain corners of the Internet where conspiracy theories thrive) is abuzz with speculation that the extrasolar asteroid ‘Oumuamua, best known, apart from its hyperbolic trajectory, for its oddly elongated shape, may be of artificial, extraterrestrial origin.

Some mention the similarity between ‘Oumuamua and Arthur C. Clarke’s extraterrestrial generational ship Rama, forgetting that Rama was a ship 50 kilometers in length, an obviously engineered cylinder, not a rock.

But then… I suddenly remembered that there was another artificial object of extrasolar origin in the science-fiction literature. It is Iilah, from A. E. van Vogt’s 1948 short story Dormant. Iilah is not discovered in orbit; rather, it lies dormant on the ocean floor for millions of years until it is awakened by the feeble radioactivity of isotopes that appear in the ocean as a result of the use and testing of nuclear weapons.

Iilah climbs out of the sea and is thus discovered. It becomes an object of study by a paranoid military, which ultimately decides to destroy it using a nuclear weapon.

Unfortunately, the energy of the explosion achieves the exact opposite: instead of destroying Iilah, it fully awakens it, making it finally remember its original purpose. Iilah then sets itself up for a tremendous explosion that knocks the Earth out of orbit, ultimately causing it to fall into the Sun, turning the Sun into a nova. Why? Because Iilah was programmed to do this. Because “robot atom bombs do not make up their own minds.”

Artist’s impression of ‘Oumuamua

So here is the thing… the Iilah of van Vogt’s story had almost the exact same dimensions (it was about 400 feet in length) and appearance (a rock, like rough granite, with streaks of pink) as ‘Oumuamua.

Go figure.

 Posted by at 10:15 pm