Astronomy

Will LSST significantly increase the rate of astronomical event alerts?

The NPR news article and podcast New Telescope Promises To Revolutionize Astronomy updates the status of the "Large Synoptic Survey Telescope under construction on Cerro Pachón in Chile".

NPR's Joe Palca's piece includes sound bites from astronomers, including Caltech astronomer Mansi Kasliwal:

PALCA: Kasliwal says although LSST will detect these events, other telescopes are better suited to study them in detail. So the plan is to send out an alert to other telescopes when LSST sees something interesting. Of course, that means the other telescope has to drop what it was doing, but Kasliwal says it will be worth it.

Isn't the LSST expected to "see something interesting" quite often?

Because of the size and scope of LSST's surveying capability, which stems from its huge corrected field of view, huge focal plane array, and huge image-processing and event-detection capacity, there is the potential for a significant increase in the rate of alerts generated and sent to observatories and to astronomers' cell phones (one notable example of such things).

Question: Will LSST significantly increase the rate of astronomical event alerts? Has there been an estimate of how the overall rate will change when it comes online?

I'm wondering whether astronomers will get woken up by their phones more often, or whether some observatories will have to decide much more frequently whether to promptly change their observing schedules.


Yes. The estimates are that LSST will produce about 10 million alerts per night (LSST Alert Distribution presentation), at least a factor of five more than currently come from ZTF. ZTF is roughly a 10% scale model of what the LSST alert stream will look like: it produces about 5× fewer alerts, and its alert packets contain about 50-60% of the information that will be in the LSST alerts. The ZTF alert stream (Patterson et al. 2019) also uses the same software (Apache Avro, Spark and Kafka) as LSST will use, just running on less hardware.

These alerts (in the Avro serialized binary data format) will not be sent directly to astronomers (and certainly not to cell phones). The plan is to send the alerts to a few "brokers" that will do additional filtering, classification and cross-matching against other catalogs to give additional information and context to the alert (which is basically "this source on the sky changed in brightness by X amount, or appeared"). There are several brokers in various stages of development, of which the leading examples are ANTARES and Lasair.

Astronomers, or more likely their software systems, will subscribe to these brokers and add filters to cut the stream down to the particular types of objects and science that interest them. These will most likely go into their own databases of interesting targets, and the astronomers or their software will decide on the small number of most interesting ones to send out to other telescopes, some robotic for rapid-response science cases, some traditionally scheduled. The large number of alerts and follow-up triggers has the potential to be very disruptive to the prior way of operating observatories, which is one of the reasons why we are working on systems like the Astronomical Event Observatory Network (AEON) to handle and coordinate this.
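
As a rough sketch of what "subscribe and filter" could look like in practice, the snippet below consumes an Avro-serialized alert stream from a Kafka topic and keeps only bright, probably-real events. It is illustrative only: the broker host, topic name, thresholds and packet field names are assumptions, not the actual LSST or broker schema.

    import io

    import fastavro                       # Avro decoding, as used by the ZTF/LSST alert streams
    from confluent_kafka import Consumer  # Kafka client, as used by the ZTF/LSST alert streams

    # Hypothetical connection details; a real broker (ANTARES, Lasair, ...) publishes its own.
    consumer = Consumer({
        "bootstrap.servers": "alerts.example-broker.org:9092",   # illustrative host
        "group.id": "my-follow-up-program",
        "auto.offset.reset": "latest",
    })
    consumer.subscribe(["example_alert_topic"])                   # illustrative topic name

    def interesting(alert):
        """Toy science filter: bright events with a high real/bogus score."""
        cand = alert.get("candidate", {})                         # field names are illustrative
        return cand.get("magpsf", 99.0) < 19.0 and cand.get("rb", 0.0) > 0.65

    try:
        while True:
            msg = consumer.poll(1.0)
            if msg is None or msg.error():
                continue
            # Each Kafka message payload is an Avro container; decode it into Python dicts.
            for alert in fastavro.reader(io.BytesIO(msg.value())):
                if interesting(alert):
                    print(alert["objectId"], alert["candidate"]["magpsf"])  # e.g. queue for follow-up
    finally:
        consumer.close()

In reality most of this plumbing is hidden behind a broker's own client libraries or web interface; the point is that the selection logic lives with the astronomer, not with LSST.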


Astroinformatics: data-oriented astronomy research and education

The growth of data volumes in science is reaching epidemic proportions. Consequently, the status of data-oriented science as a research methodology needs to be elevated to that of the more established scientific approaches of experimentation, theoretical modeling, and simulation. Data-oriented scientific discovery is sometimes referred to as the new science of X-Informatics, where X refers to any science (e.g., Bio-, Geo-, Astro-) and informatics refers to the discipline of organizing, describing, accessing, integrating, mining, and analyzing diverse data resources for scientific discovery. Many scientific disciplines are developing formal sub-disciplines that are information-rich and data-based, to such an extent that these are now stand-alone research and academic programs recognized on their own merits. These disciplines include bioinformatics and geoinformatics, and will soon include astroinformatics. We introduce Astroinformatics, the new data-oriented approach to 21st century astronomy research and education. In astronomy, petascale sky surveys will soon challenge our traditional research approaches and will radically transform how we train the next generation of astronomers, whose experiences with data are now increasingly more virtual (through online databases) than physical (through trips to mountaintop observatories). We describe Astroinformatics as a rigorous approach to these challenges. We also describe initiatives in science education (not only in astronomy) through which students are trained to access large distributed data repositories, to conduct meaningful scientific inquiries into the data, to mine and analyze the data, and to make data-driven scientific discoveries. These are essential skills for all 21st century scientists, particularly in astronomy as major new multi-wavelength sky surveys (that produce petascale databases and image archives) and grand-scale simulations (that generate enormous outputs for model universes, such as the Millennium Simulation) become core research components for a significant fraction of astronomical researchers.



Introduction

Current performance requirements on the LSST alert system expect to distribute at minimum nAlertVisitAvg = 10,000 alert events every 39 seconds, with a stretch goal of supporting 100,000 per visit. This minimum averages to ~250 alerts per second, though they may be transmitted at a higher, much more bursty rate, compared to the current VOEvent rate of ~1 alert per minute. The LSST alerts are planned to contain a significant amount of information in each alert event packet, including individual event measurements made on the difference image, a measure of the “spuriousness” of the event, a limited history of previous observations of the object associated with the event if known, characteristics of the variability of the object’s lightcurve, the IDs of and distances to nearby known objects, and cutout images of both the difference image and the template that was subtracted. The experiments here use a template alert event packet, described below, with very limited content (e.g. no history) and a size of 136 KB, dominated by the 90 KB of the two cutout images in FITS format. A full alert packet with all event measurements and a history of previous observations will be larger. On the receiving end of the alert distribution system will be community brokers and a limited filtering service mini-broker. The LSST filtering service is expected to be able to support numBrokerUsers = 100 simultaneously connected users, each receiving at most numBrokerAlerts = 20 full alerts per visit. In order to meet LSST’s alert stream needs, the alert distribution system must be able to scale to support the expected volume of the alert stream, to support the number of connections, and to allow filtering capabilities that can be made user friendly with a simple Python or SQL-like language. Here we test the scalability of a preliminary mock alert distribution system testbed.
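
For a sense of scale, here is a minimal arithmetic sketch of the average rates these requirements imply (peak rates will be burstier and higher):

    # Average-rate implications of the numbers quoted above.
    alerts_per_visit = 10_000       # nAlertVisitAvg (minimum requirement)
    visit_interval_s = 39           # seconds between visits
    packet_size_kb = 136            # template alert packet used in these experiments

    alert_rate = alerts_per_visit / visit_interval_s       # ~256 alerts/s on average
    bandwidth_mb_s = alert_rate * packet_size_kb / 1000    # ~35 MB/s
    bandwidth_gbit_s = bandwidth_mb_s * 8 / 1000           # ~0.28 Gbit/s sustained

    # Mini-broker filtering service sizing:
    broker_users, alerts_per_user = 100, 20
    filtered_per_visit = broker_users * alerts_per_user    # up to 2,000 full alerts per visit

    print(f"{alert_rate:.0f} alerts/s, ~{bandwidth_mb_s:.0f} MB/s ({bandwidth_gbit_s:.2f} Gbit/s)")
    print(f"filtering service: up to {filtered_per_visit} full alerts per visit")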


History

In June 2019, the renaming of the Large Synoptic Survey Telescope (LSST) to the Vera C. Rubin Observatory was initiated by Eddie Bernice Johnson and Jenniffer González-Colón. [22] The renaming was enacted into law on December 20, 2019. [23] The official renaming was announced at the 2020 American Astronomical Society winter meeting. [12] The observatory is named after Vera C. Rubin. The name honors the legacy of Rubin and her colleagues in probing the nature of dark matter by mapping and cataloging billions of galaxies through space and time. [22]

The telescope will be named the Simonyi Survey Telescope, to acknowledge the private donors Charles and Lisa Simonyi. [24]

The LSST is the successor to a long tradition of sky surveys. [25] These started as visually compiled catalogs in the 18th century, such as the Messier catalog. These were replaced by photographic surveys, starting with the 1885 Harvard Plate Collection, the National Geographic Society – Palomar Observatory Sky Survey, and others. By about 2000, the first digital surveys, such as the Sloan Digital Sky Survey (SDSS), began to replace the photographic plates of the earlier surveys.

LSST evolved from the earlier concept of the Dark Matter Telescope, [26] mentioned as early as 1996. [27] The fifth decadal report, Astronomy and Astrophysics in the New Millennium, was released in 2001, [28] and recommended the "Large-Aperture Synoptic Survey Telescope" as a major initiative. Even at this early stage the basic design and objectives were set:

The Simonyi Survey Telescope is a 6.5-m-class optical telescope designed to survey the visible sky every week down to a much fainter level than that reached by existing surveys. It will catalog 90 percent of the near-Earth objects larger than 300 m and assess the threat they pose to life on Earth. It will find some 10,000 primitive objects in the Kuiper Belt, which contains a fossil record of the formation of the solar system. It will also contribute to the study of the structure of the universe by observing thousands of supernovae, both nearby and at large redshift, and by measuring the distribution of dark matter through gravitational lensing. All the data will be available through the National Virtual Observatory (see below under "Small Initiatives"), providing access for astronomers and the public to very deep images of the changing night sky.

Early development was funded by a number of small grants, with major contributions in January 2008 by software billionaires Charles and Lisa Simonyi and Bill Gates of $20 million and $10 million, respectively. [29] [24] $7.5 million was included in the U.S. President's FY2013 NSF budget request. [30] The Department of Energy is funding construction of the digital camera component by the SLAC National Accelerator Laboratory, as part of its mission to understand dark energy. [31]

In the 2010 decadal survey, LSST was ranked as the highest-priority ground-based instrument. [32]

NSF funding for the rest of construction was authorized as of 1 August 2014. [16] The camera is separately funded by the Department of Energy. The lead organizations are: [31]

  • The SLAC National Accelerator Laboratory to design and construct the LSST camera
  • The National Optical Astronomy Observatory to provide the telescope and site team
  • The National Center for Supercomputing Applications to construct and test the archive and data access center
  • The Association of Universities for Research in Astronomy is responsible for overseeing the LSST construction.

As of November 2016, the project critical path was the camera construction, integration and testing. [33]

In May 2018, Congress surprisingly appropriated much more funding than the telescope had asked for, in hopes of speeding up construction and operation. Telescope management was thankful but unsure this would help, since at the late stage of construction they were not cash-limited. [34]

The Simonyi Survey Telescope design is unique among large telescopes (8 m-class primary mirrors) in having a very wide field of view: 3.5 degrees in diameter, or 9.6 square degrees. For comparison, both the Sun and the Moon, as seen from Earth, are 0.5 degrees across, or 0.2 square degrees. Combined with its large aperture (and thus light-collecting ability), this will give it a spectacularly large etendue of 319 m²·deg². [6] This is more than three times the etendue of the best existing telescopes, the Subaru Telescope with its Hyper Suprime-Cam [35] and Pan-STARRS, and more than an order of magnitude better than that of most large telescopes. [36]

Optics

The Simonyi Survey Telescope is the latest in a long line of improvements giving telescopes larger fields of view. The earliest reflecting telescopes used spherical mirrors, which, although easy to fabricate and test, suffer from spherical aberration; a very long focal length was needed to reduce spherical aberration to a tolerable level. Making the primary mirror parabolic removes spherical aberration on-axis, but the field of view is then limited by off-axis coma. Such a parabolic primary, with either a prime or Cassegrain focus, was the most common optical design up through the Hale telescope in 1949. After that, telescopes mostly used the Ritchey–Chrétien design, with two hyperbolic mirrors to remove both spherical aberration and coma, leaving only astigmatism and giving a wider useful field of view. Most large telescopes since the Hale use this design; the Hubble and Keck telescopes are Ritchey–Chrétien, for example. LSST will use a three-mirror anastigmat to cancel astigmatism: three non-spherical mirrors. The result is sharp images over a very wide field of view, but at the expense of light-gathering power due to the large tertiary mirror. [9]

The telescope's primary mirror (M1) is 8.4 meters (28 ft) in diameter, the secondary mirror (M2) is 3.4 meters (11.2 ft) in diameter, and the tertiary mirror (M3), inside the ring-like primary, is 5.0 meters (16 ft) in diameter. The secondary mirror is expected to be the largest convex mirror in any operating telescope, until surpassed by the ELT's 4.2 m secondary c. 2024. The second and third mirrors reduce the primary mirror's light-collecting area to 35 square meters (376.7 sq ft), equivalent to a 6.68-meter-diameter (21.9 ft) telescope. [6] Multiplying this by the field of view produces an étendue of 336 m²·deg²; the actual figure is reduced by vignetting. [37]

The primary and tertiary mirrors (M1 and M3) are designed as a single piece of glass, the "M1M3 monolith". Placing the two mirrors in the same location minimizes the overall length of the telescope, making it easier to reorient quickly. Making them out of the same piece of glass results in a stiffer structure than two separate mirrors, contributing to rapid settling after motion. [9]

The optics includes three corrector lenses to reduce aberrations. These lenses, and the telescope's filters, are built into the camera assembly. The first lens at 1.55 m diameter is the largest lens ever built, [38] and the third lens forms the vacuum window in front of the focal plane. [37]

Camera

A 3.2-gigapixel prime focus [note 1] digital camera will take a 15-second exposure every 20 seconds. [6] Repointing such a large telescope (including settling time) within 5 seconds requires an exceptionally short and stiff structure. This in turn implies a very small f-number, which requires very precise focusing of the camera. [39]

The 15-second exposures are a compromise to allow spotting both faint and moving sources. Longer exposures would reduce the overhead of camera readout and telescope re-positioning, allowing deeper imaging, but then fast moving objects such as near-Earth objects would move significantly during an exposure. [40] Each spot on the sky is imaged with two consecutive 15 second exposures, to efficiently reject cosmic ray hits on the CCDs. [41]

The camera focal plane is flat, 64 cm in diameter. The main imaging is performed by a mosaic of 189 CCD detectors, each with 16 megapixels. [42] They are grouped into a 5×5 grid of "rafts", where the central 21 rafts contain 3×3 imaging sensors, while the four corner rafts contain only three CCDs each, for guiding and focus control. The CCDs provide better than 0.2 arcsecond sampling, and will be cooled to approximately −100 °C (173 K) to help reduce noise. [43]
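
As a quick sanity check (the assumptions are mine), the CCD mosaic, the field of view and the quoted 3.2-gigapixel figure are mutually consistent:

    import math

    n_ccds = 189
    pixels_per_ccd = 16e6                                      # "16 megapixels", as quoted
    total_from_ccds = n_ccds * pixels_per_ccd                  # ~3.0e9 pixels

    field_diam_deg = 3.5                                       # field-of-view diameter
    pixel_scale_arcsec = 0.2                                   # sampling, as quoted
    diam_pixels = field_diam_deg * 3600 / pixel_scale_arcsec   # ~63,000 pixels across the field
    total_from_fov = math.pi / 4 * diam_pixels ** 2            # ~3.1e9 pixels in a circular field

    print(f"from CCD count:       {total_from_ccds:.2e} pixels")
    print(f"from field and scale: {total_from_fov:.2e} pixels")
    # Both land close to the quoted 3.2 gigapixels.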

The camera includes a filter located between the second and third lenses, and an automatic filter-changing mechanism. Although the camera has six filters (ugrizy) covering 330 to 1080 nm wavelengths, [44] the camera's position between the secondary and tertiary mirrors limits the size of its filter changer. It can only hold five filters at a time, so each day one of the six must be chosen to be omitted for the following night. [45]

Image data processing

Allowing for maintenance, bad weather and other contingencies, the camera is expected to take over 200,000 pictures (1.28 petabytes uncompressed) per year, far more than can be reviewed by humans. Managing and effectively analyzing the enormous output of the telescope is expected to be the most technically difficult part of the project. [47] [48] In 2010, the initial computer requirements were estimated at 100 teraflops of computing power and 15 petabytes of storage, rising as the project collects data. [49] By 2018, estimates had risen to 250 teraflops and 100 petabytes of storage. [50]
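
The quoted 1.28 petabytes per year follows directly from the image count and the camera's pixel count, assuming 16-bit (2-byte) raw pixels, which is my assumption here:

    images_per_year = 200_000
    pixels_per_image = 3.2e9
    bytes_per_pixel = 2                                     # assumption: 16-bit raw pixels

    bytes_per_image = pixels_per_image * bytes_per_pixel    # 6.4 GB per uncompressed image
    bytes_per_year = bytes_per_image * images_per_year      # 1.28e15 bytes

    print(f"{bytes_per_image / 1e9:.1f} GB per image, {bytes_per_year / 1e15:.2f} PB per year")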

Once images are taken, they are processed according to three different timescales: prompt (within 60 seconds), daily, and annual. [51]

The prompt products are alerts, issued within 60 seconds of observation, about objects that have changed brightness or position relative to archived images of that sky position. Transferring, processing, and differencing such large images within 60 seconds (previous methods took hours, on smaller images) is a significant software engineering problem by itself. [52] Approximately 10 million alerts will be generated per night. [53] Each alert will include the following: [54] : 22

  • Alert and database ID: IDs uniquely identifying this alert
  • The photometric, astrometric, and shape characterization of the detected source
  • 30×30 pixel (on average) cut-outs of the template and difference images (in FITS format)
  • The time series (up to a year) of all previous detections of this source
  • Various summary statistics ("features") computed of the time series

There is no proprietary period associated with alerts: they are available to the public immediately, since the goal is to quickly transmit nearly everything LSST knows about any given event, enabling downstream classification and decision making. LSST will generate an unprecedented rate of alerts, hundreds per second when the telescope is operating. [note 2] Most observers will be interested in only a tiny fraction of these events, so the alerts will be fed to "event brokers" which forward subsets to interested parties. LSST will provide a simple broker, [54]: 48 and provide the full alert stream to external event brokers. [55] The Zwicky Transient Facility will serve as a prototype of the LSST system, generating 1 million alerts per night. [56]
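
To make the list of alert contents above concrete, here is a toy Python representation of a single alert packet. The field names and values are purely illustrative and are not the actual LSST Avro schema:

    # Illustrative structure of one alert packet (names and values are made up).
    example_alert = {
        "alertId": 123456789,                            # ID uniquely identifying this alert
        "source": {                                      # measurements on the difference image
            "ra_deg": 215.123456, "dec_deg": -52.654321, # astrometry
            "psfFlux": 3.1e-28, "psfFluxErr": 2.0e-29,   # photometry
            "shape": {"ixx": 1.2, "iyy": 1.1, "ixy": 0.05},
            "spuriousness": 0.02,                        # low value = likely a real event
        },
        "cutoutTemplate": b"<FITS bytes>",               # ~30x30 pixel template cutout
        "cutoutDifference": b"<FITS bytes>",             # ~30x30 pixel difference-image cutout
        "previousDetections": [                          # up to a year of prior detections
            {"mjd": 60001.23, "psfFlux": 2.9e-28},
            {"mjd": 60004.21, "psfFlux": 3.0e-28},
        ],
        "lightcurveFeatures": {"stetsonJ": 0.8, "bestPeriodDays": None},  # summary statistics
    }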

Daily products, released within 24 hours of observation, comprise the images from that night, and the source catalogs derived from difference images. This includes orbital parameters for Solar System objects. Images will be available in two forms: Raw Snaps, or data straight from the camera, and Single Visit Images, which have been processed and include instrumental signature removal (ISR), background estimation, source detection, deblending and measurements, point spread function estimation, and astrometric and photometric calibration. [57]

Annual release data products will be made available once a year, by re-processing the entire science data set to date. These include:

  • Calibrated images
  • Measurements of positions, fluxes, and shapes
  • Variability information
  • A compact description of light curves
  • A uniform reprocessing of the difference-imaging-based prompt data products
  • A catalog of roughly 6 million Solar System objects, with their orbits
  • A catalog of approximately 37 billion sky objects (20 billion galaxies and 17 billion stars), each with more than 200 attributes [50]

The annual release will be computed partially by NCSA, and partially by IN2P3 in France. [58]

LSST is reserving 10% of its computing power and disk space for user generated data products. These will be produced by running custom algorithms over the LSST data set for specialized purposes, using Application Program Interfaces (APIs) to access the data and store the results. This avoids the need to download, then upload, huge quantities of data by allowing users to use the LSST storage and computation capacity directly. It also allows academic groups to have different release policies than LSST as a whole.

An early version of the LSST image data processing software is being used by the Subaru telescope's Hyper Suprime-Cam instrument, [59] a wide-field survey instrument with a sensitivity similar to LSST but one fifth the field of view: 1.8 square degrees versus the 9.6 square degrees of LSST.

LSST will cover about 18,000 deg² of the southern sky with 6 filters in its main survey, with about 825 visits to each spot. The 5σ (SNR greater than 5) magnitude limits are expected to be r < 24.5 in single images, and r < 27.8 in the full stacked data. [60]
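
As a rough cross-check of those limits (assuming sky-background-limited point-source photometry and, crudely, that the 825 visits are split evenly between the six filters), stacking N exposures deepens the 5σ limit by about 2.5·log10(√N) magnitudes:

    import math

    single_visit_r = 24.5                    # quoted single-image 5-sigma limit
    visits_r_band = 825 / 6                  # crude assumption: even split across filters

    gain_mag = 2.5 * math.log10(math.sqrt(visits_r_band))   # = 1.25 * log10(N), ~2.7 mag
    print(f"coadd limit ~ r < {single_visit_r + gain_mag:.1f}")
    # Gives roughly r < 27.2, the same ballpark as the quoted r < 27.8; the exact figure
    # depends on the real per-band visit allocation and on observing conditions.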

The main survey will use about 90% of the observing time. The remaining 10% will be used to obtain improved coverage for specific goals and regions. This includes very deep (r ∼ 26) observations, very short revisit times (roughly one minute), observations of "special" regions such as the Ecliptic, the Galactic plane, and the Large and Small Magellanic Clouds, and areas covered in detail by multi-wavelength surveys such as COSMOS and the Chandra Deep Field South. [41] Combined, these special programs will increase the total area to about 25,000 deg². [6]

Particular scientific goals of the LSST include: [61]

  • Studying dark energy and dark matter by measuring weak gravitational lensing, baryon acoustic oscillations, and photometry of type Ia supernovae, all as a function of redshift. [41]
  • Mapping small objects in the Solar System, particularly near-Earth asteroids and Kuiper belt objects. LSST is expected to increase the number of cataloged objects by a factor of 10–100. [62] It will also help with the search for the hypothesized Planet Nine. [63][64]
  • Detecting transient astronomical events including novae, supernovae, gamma-ray bursts, quasar variability, and gravitational lensing, and providing prompt event notifications to facilitate follow-up.
  • Mapping the Milky Way.

Because of its wide field of view and high sensitivity, LSST is expected to be among the best prospects for detecting optical counterparts to gravitational wave events detected by LIGO and other observatories. [65]

It is also hoped that the vast volume of data produced will lead to additional serendipitous discoveries.

NASA has been tasked by the US Congress with detecting and cataloging 90% of the NEO population of size 140 meters or greater. [66] LSST, by itself, is estimated to detect 62% of such objects, [67] and according to the National Academy of Sciences, extending its survey from ten years to twelve would be the most cost-effective way of finishing the task. [68]

Rubin Observatory has a program of Education and Public Outreach (EPO). Rubin Observatory EPO will serve four main categories of users: the general public, formal educators, citizen science principal investigators, and content developers at informal science education facilities. [69] [70] Rubin Observatory will partner with Zooniverse for a number of their citizen science projects. [71]

There have been many other optical sky surveys, some still on-going. For comparison, here are some of the main currently used optical surveys, with differences noted:

  • Photographic sky surveys, such as the National Geographic Society – Palomar Observatory Sky Survey and its digitized version, the Digitized Sky Survey. This technology is obsolete, with much less depth, and in general taken from sites of worse seeing. However, these archives are still used since they span a much larger time interval—more than 100 years in some cases.
  • The Sloan Digital Sky Survey (2000–2009) surveyed 14,555 square degrees of the northern hemisphere sky with a 2.5-meter telescope. It continues to the present day as a spectrographic survey.
  • Pan-STARRS (2010–present) is an ongoing sky survey using two wide-field 1.8 m Ritchey–Chrétien telescopes located at Haleakala in Hawaii. Until LSST begins operation, it will remain the best detector of near-Earth objects. Its coverage, 30,000 square degrees, is comparable to what LSST will cover.
  • The DESI Legacy Imaging Surveys (2013–present) look at 14,000 square degrees of the northern and southern sky with the Bok 2.3-m telescope, the 4-meter Mayall telescope and the 4-meter Victor M. Blanco Telescope. The Legacy Surveys make use of the Mayall z-band Legacy Survey, the Beijing-Arizona Sky Survey and the Dark Energy Survey. The Legacy Surveys avoided the Milky Way, since they are primarily concerned with distant galaxies. [72] The area of DES (5,000 square degrees) is entirely contained within the anticipated survey area of LSST in the southern sky. [73]
  • Gaia (2014–present) is an ongoing space-based survey of the entire sky, whose primary goal is extremely precise astrometry of a billion stars and galaxies. Its limited collecting area (0.7 m²) means it cannot see objects as faint as other surveys can, but its locations are far more precise.
  • The Zwicky Transient Facility (2018–present) is a similar rapid wide-field survey to detect transient events. The telescope has an even larger field of view (47 square degrees; 5× the field), but a significantly smaller aperture (1.22 m; 1/30 the area). It is being used to develop and test the LSST automated alert software.
  • The Space Surveillance Telescope (planned 2022) is a similar rapid wide-field survey telescope used primarily for military applications, with secondary civil applications including space debris and NEO detection and cataloguing.

The Cerro Pachón site was selected in 2006. The main factors were the number of clear nights per year, seasonal weather patterns, and the quality of images as seen through the local atmosphere (seeing). The site also needed to have an existing observatory infrastructure, to minimize costs of construction, and access to fiber optic links, to accommodate the 30 terabytes of data LSST will produce each night. [74]

As of February 2018, construction was well underway. The shell of the summit building is complete, and 2018 saw the installation of major equipment, including HVAC, the dome, mirror coating chamber, and the telescope mount assembly. It also saw the expansion of the AURA base facility in La Serena and the summit dormitory shared with other telescopes on the mountain. [53]

By February 2018, the camera and telescope shared the critical path. The main risk was deemed to be whether sufficient time was allotted for system integration. [75]

The project remains within budget, although the budget contingency is tight. [53]

In March 2020, work on the summit facility, and the main camera at SLAC, was suspended due to the COVID-19 pandemic, though work on software continues. [76] During this time, the commissioning camera arrived at the base facility and is being tested there. It will be moved to the summit when it is safe to do so. [77]

Mirrors

The primary mirror, the most critical and time-consuming part of a large telescope's construction, was made over a 7-year period by the University of Arizona's Steward Observatory Mirror Lab. [78] Construction of the mold began in November 2007, [79] mirror casting was begun in March 2008, [80] and the mirror blank was declared "perfect" at the beginning of September 2008. [81] In January 2011, both M1 and M3 figures had completed generation and fine grinding, and polishing had begun on M3.

The mirror was completed in December 2014. [82] The M3 portion especially suffered from tiny air bubbles which, when they broke the surface, caused "crow's feet" defects in the surface. [83] The bubbles trapped grinding abrasive, which produced scratches a few mm long radiating out from the bubble. Left as-is, these would enlarge the telescope's point spread function, reducing the sensitivity by 3% (to 97% of nominal) and increasing the portion of the sky obscured by bright stars from 4% to 4.8% of the survey area. As of January 2015, the project was exploring ways to fill the holes and scratches; it ultimately concluded that no further polishing was necessary, as the mirror surfaces exceeded the structure function requirements.

The mirror was formally accepted on 13 February 2015. [84] [85] It was then placed in the mirror transport box and stored in an airplane hangar [86] until it is integrated with its mirror support. [87] In October 2018, it was moved back to the mirror lab and integrated with the mirror support cell. [88] It went through additional testing in January/February 2019, then was returned to its shipping crate. In March 2019, it was sent by truck to Houston, [89] was placed on a ship for delivery to Chile, [90] and arrived on the summit in May. [91] There it will be re-united with the mirror support cell and coated.

The coating chamber, which will be used to coat the mirrors once they arrive, itself arrived at the summit in November 2018. [88]

The secondary mirror was manufactured by Corning of ultra low expansion glass and coarse-ground to within 40 μm of the desired shape. [4] In November 2009, the blank was shipped to Harvard University for storage [92] until funding to complete it was available. On October 21, 2014, the secondary mirror blank was delivered from Harvard to Exelis (now a subsidiary of Harris Corporation) for fine grinding. [93] The completed mirror was delivered to Chile on December 7, 2018, [88] and was coated in July 2019. [94]

Building

Site excavation began in earnest March 8, 2011, [95] and the site had been leveled by the end of 2011. [96] Also during that time, the design continued to evolve, with significant improvements to the mirror support system, stray-light baffles, wind screen, and calibration screen.

In 2015, a large amount of broken rock and clay was found under the site of the support building adjacent to the telescope. This caused a 6-week construction delay while it was dug out and the space filled with concrete. This did not affect the telescope proper or its dome, whose much more important foundations were examined more thoroughly during site planning. [97] [98]

The building was declared substantially complete in March 2018. [99] As of November 2017, the dome was expected to be complete in August 2018, [53] but in a picture from May 2019 it was still incomplete. [91] The (still incomplete) Rubin Observatory dome first rotated under its own power in 4Q2019. [100]

Telescope Mount Assembly

The telescope mount, and the pier on which it sits, are substantial engineering projects in their own right. The main technical problem is that the telescope must slew 3.5 degrees to the adjacent field and settle within four seconds. [note 3] [101]: 10 This requires a very stiff pier and telescope mount, with very high-speed slew and acceleration (10°/s and 10°/s², respectively [102]). The basic design is conventional: an altitude-over-azimuth mount made of steel, with hydrostatic bearings on both axes, mounted on a pier which is isolated from the dome foundations. However, the LSST pier is unusually large (16 m diameter) and robust (1.25 m thick walls), and mounted directly to virgin bedrock, [101] where care was taken during site excavation to avoid using explosives that would crack it. [98]: 11–12 Other unusual design features are linear motors on the main axes and a recessed floor on the mount. This allows the telescope to extend slightly below the azimuth bearings, giving it a very low center of gravity.
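
A short kinematic sketch (assuming a simple accelerate-then-decelerate slew profile, which is my simplification) shows why those slew specifications are consistent with the four-second repointing budget:

    import math

    slew_deg = 3.5      # distance to the adjacent field
    accel = 10.0        # deg/s^2
    v_max = 10.0        # deg/s

    t_half = math.sqrt(slew_deg / accel)   # time to cover half the slew while accelerating
    v_peak = accel * t_half                # ~5.9 deg/s, below the 10 deg/s limit
    slew_time = 2 * t_half                 # ~1.2 s

    print(f"peak speed {v_peak:.1f} deg/s, slew time {slew_time:.1f} s")
    # Roughly 1.2 s of slewing leaves most of the 4-second budget for the mount
    # and optics to settle.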

The contract for the Telescope Mount Assembly was signed in August 2014. [103] The TMA passed its acceptance tests in 2018 [88] and arrived at the construction site in September 2019. [104]

Camera

In August 2015, the LSST Camera project, which is separately funded by the U.S. Department of Energy, passed its "critical decision 3" design review, with the review committee recommending DoE formally approve start of construction. [105] On August 31, the approval was given, and construction began at SLAC. [106] As of September 2017, construction of the camera was 72% complete, with sufficient funding in place (including contingencies) to finish the project. [53] By September 2018, the cryostat was complete, the lenses ground, and 12 of the 21 needed rafts of CCD sensors had been delivered. [107] As of September 2020, the entire focal plane was complete and undergoing testing. [108]

Rendering of the LSST camera.

Color-coded cutaway drawing of the LSST camera.

Exploded view of the optical components of the LSST camera.

Before the final camera is installed, a smaller and simpler version (the Commissioning Camera, or ComCam) will be used "to perform early telescope alignment and commissioning tasks, complete engineering first light, and possibly produce early usable science data". [109]

Data Transport

The data must be transported from the camera, to facilities at the summit, to the base facilities, and then to the LSST Data Facility at the National Center for Supercomputing Applications in the United States. [110] This transfer must be very fast (100 Gbit/s or better) and reliable since NCSA is where the data will be processed into scientific data products, including real-time alerts of transient events. This transfer uses multiple fiber optic cables from the base facility in La Serena to Santiago, then via two redundant routes to Miami, where it connects to existing high speed infrastructure. These two redundant links were activated in March 2018 by the AmLight consortium. [111]
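
A rough feasibility check (the nightly data volume is the ~30 terabytes cited earlier for the site's fiber requirement; treating the link as fully saturated is my simplification):

    nightly_bytes = 30e12        # ~30 TB of data per night
    link_gbit_s = 100            # required link capacity

    transfer_s = nightly_bytes * 8 / (link_gbit_s * 1e9)    # ~2,400 s
    print(f"~{transfer_s / 60:.0f} minutes at full link speed")
    # About 40 minutes if the link were saturated; in practice the data flow throughout
    # the night, and it is the 60-second prompt alert processing that makes sustained
    # high bandwidth and reliability essential.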

Since the data transfer crosses international borders, many different groups are involved. These include the Association of Universities for Research in Astronomy (AURA, Chile and the USA), REUNA [112] (Chile), Florida International University (USA), AmLightExP [111] (USA), RNP [113] (Brazil), and University of Illinois at Urbana–Champaign NCSA (USA), all of which participate in the LSST Network Engineering Team (NET). This collaboration designs and delivers end-to-end network performance across multiple network domains and providers.

A study in 2020 by the European Southern Observatory estimated that up to 30% to 50% of the exposures around twilight with the Rubin Observatory would be severely affected by satellite constellations. Survey telescopes have a large field of view and study short-lived phenomena like supernovae and asteroids, [114] and mitigation methods that work on other telescopes may be less effective. The images would be affected especially during twilight (50%) and at the beginning and end of the night (30%). For bright trails, the complete exposure could be ruined by a combination of saturation, crosstalk (far-away pixels gaining signal due to the nature of CCD electronics), and ghosting (internal reflections within the telescope and camera) caused by the satellite trail, affecting an area of the sky significantly larger than the satellite path itself during imaging. For fainter trails, only a quarter of the image would be lost. [115] A previous study by the Rubin Observatory found an impact of 40% at twilight, with only nights in the middle of the winter unaffected. [116]

Possible approaches to this problem would be a reduction of the number or brightness of satellites, upgrades to the telescope's CCD camera system, or both. Observations of Starlink satellites showed a decrease in satellite trail brightness for darkened satellites. However, this decrease is not enough to mitigate the effect on wide-field surveys like the one conducted by the Rubin Observatory. [117] Therefore, SpaceX is introducing a sunshade on newer satellites, to keep the portions of the satellite visible from the ground out of direct sunlight. The objective is to keep the satellites below 7th magnitude, to avoid saturating the detectors. [118] This limits the problem to the trail of the satellite rather than the whole image. [119]



Starlink satellites have already raised concerns about collisions

When launching its first batch of Starlink satellites, SpaceX said it planned to “deorbit” two satellites by using ion engines to move them into Earth’s atmosphere, where they would burn up. This would demonstrate how the company could remove old or broken satellites from orbit, thereby lowering the risk of collisions.

Those satellites had not yet deorbited as of October 27, according to observations by McDowell.

SpaceX and Amazon are also both designing their satellites to be able to avoid collisions through automatic or operator-directed manoeuvres. But even so, a recent Amazon filing to the FCC suggested that if 5% of its satellites fail (or their avoidance systems do), the risk of a collision in a large constellation would be around 6%.

Such a rate is “well beyond what Amazon would view as expected or acceptable,” the company wrote in a letter to the FCC.

A space-debris hit to space shuttle Endeavour's radiator, found after one of its missions. The entry hole is about 0.25 inches wide, and the exit hole is twice as large. (Image: NASA)

For its part, SpaceX has seen a 5% failure rate with its first batch of 60 Starlink satellites: Three stopped working after their deployment. (Since that batch of satellites was experimental, Elon Musk had told reporters before the launch that failures were possible.)

Those three defunct satellites will circle the planet until Earth’s gravity pulls them back towards the ground. As they fall into the atmosphere, they will burn up, likely within a year. But until then, they will stay in orbit with no way of communicating with operators on the ground.

“A 5% failure rate is actually better than most historical constellations, particularly for this size,” Brian Weeden, a program director at the space sustainability organisation Secure World Foundation, told Forbes. “But it is definitely not nearly good enough for a very large constellation of hundreds or thousands of satellites. The goal should be a failure rate of at least 1% or lower, and even that will lead to dozens of dead satellites.”

Even SpaceX satellites that aren’t dead have already caused collision problems: On September 2, the European Space Agency (ESA) revealed that it had to move its Aeolus satellite to avoid a possible collision with a SpaceX Starlink satellite.

SpaceX plans to start providing Starlink internet service in the US and Canada after six launches, sometime in 2020. It plans to have “near global coverage” in 2021.

But given the risks for telescope observations and space junk, some astronomers think that pace is too fast.

“I would prefer the licensing authorities required a slower phase-in so we can get experience with a 500-sat[ellite] constellation before going to 5,000 sats and so on,” McDowell said on Twitter.


ABSTRACT

fink is a broker designed to enable science with large time-domain alert streams such as the one from the upcoming Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST). It exhibits traditional astronomy broker features such as automatized ingestion, annotation, selection, and redistribution of promising alerts for transient science. It is also designed to go beyond traditional broker features by providing real-time transient classification that is continuously improved by using state-of-the-art deep learning and adaptive learning techniques. These evolving added values will enable more accurate scientific output from LSST photometric data for diverse science cases while also leading to a higher incidence of new discoveries which shall accompany the evolution of the survey. In this paper, we introduce fink, its science motivation, architecture, and current status, including first science verification cases using the Zwicky Transient Facility alert stream.


Abstract

We investigate the ability of the Large Synoptic Survey Telescope (LSST) to discover kilonovae (kNe) from binary neutron star (BNS) and neutron star–black hole (NSBH) mergers, focusing on serendipitous detections in the Wide-Fast-Deep (WFD) survey. We simulate observations of kNe with proposed LSST survey strategies, focusing on cadence choices that are compatible with the broader LSST cosmology programme. If all kNe are identical to GW170817, we find the baseline survey strategy will yield 58 kNe over the survey lifetime. If we instead assume a representative population model of BNS kNe, we expect to detect only 27 kNe. However, we find the choice of survey strategy significantly impacts these numbers and can increase them to 254 and 82 kNe over the survey lifetime, respectively. This improvement arises from an increased cadence of observations between different filters with respect to the baseline. We then consider the detectability of these BNS mergers by the Advanced LIGO/Virgo (ALV) detector network. If the optimal survey strategy is adopted, 202 of the GW170817-like kNe and 56 of the BNS population model kNe are detected with LSST but are below the threshold for detection by the ALV network. This represents, for both models, an increase by a factor greater than 4.5 in the number of detected sub-threshold events over the baseline strategy. These sub-threshold events would provide an opportunity to conduct electromagnetic-triggered searches for signals in gravitational-wave data and assess selection effects in measurements of the Hubble constant from standard sirens, e.g. viewing angle effects.


Will LSST Solve the Mysteries of Dark Matter and Dark Energy? (Kavli Hangout)

During a traditional Chilean stone-laying ceremony, the first building block of a powerful new astronomical observatory, the Large Synoptic Survey Telescope (LSST), was placed in the ground on Cerro Pachón in Chile on April 14. Although LSST will not see first light until 2022, the astronomical community is already abuzz about how this ambitious project will open up the "dark universe" of dark matter and dark energy as never before. That mysterious substance and force make up 95 percent of the universe's mass and energy, yet scientists are largely in the dark, as it were, about what they are.

One of the keys to LSST's potential is its 3.2 gigapixel camera, the biggest digital camera slated for construction to date. Another key is LSST's comprehensive sweep of the heavens. Every few days, the telescope will survey the entire Southern Hemisphere's sky. An astounding 30 terabytes of data will be collected nightly. After just a month of scanning the sky, LSST will have observed a greater share of the cosmos than all previous astronomical surveys combined.

On April 2, 2015, two astrophysicists and a theoretical physicist spoke with The Kavli Foundation about how LSST's deep search for dark matter and dark energy will answer fundamental questions about our universe's composition.

Steven Kahn is the director of LSST and a natural sciences professor in the Kavli Institute for Particle Astrophysics and Cosmology (KIPAC) at Stanford University. He is an experimental astrophysicist with broad interests in instrumentation, observation and theory.

Sarah Bridle is a professor of astrophysics in the Extragalactic Astronomy and Cosmology research group of the Jodrell Bank Center for Astrophysics in the School of Physics and Astronomy at the University of Manchester. She has served as the project scientist for the United Kingdom's proposal to join LSST and she presently is co-coordinator of the Weak Lensing Working Group of the Dark Energy Survey (DES), a precursor cosmological project to LSST.

Hitoshi Murayama is the director of the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU) at the University of Tokyo and a professor at the Berkeley Center for Theoretical Physics at the University of California, Berkeley. His work as a theoretical physicist spans a wide range of topics including particle physics, dark matter and dark energy. Kavli IPMU is a partner in the Hyper Suprime-Cam project, another precursor to LSST.

The following is an edited transcript of their roundtable discussion. The participants have been provided the opportunity to amend or edit their remarks.

The Kavli Foundation: Steven, when the LSST takes its first look at the universe seven years from now, why will this be so exciting to you?

Steven Kahn: In terms of how much light it will collect and its field of view, LSST is about ten times bigger than any other survey telescope either planned or existing. This is important because it will allow us to survey a very large part of the sky relatively quickly and to do many repeated observations of every part of the Southern Hemisphere over ten years. By doing this, the LSST will gather information on an enormous number of galaxies. We'll detect something like 20 billion galaxies.

Sarah Bridle: That's a hundred times as many as we're going to get with the current generation of telescopes, so it's a huge increase. With the data, we're going to be able to make a three-dimensional map of the dark matter in the universe using gravitational lensing. Then we're going to use that to tell us about how the "clumpiness" of the universe is changing with time, which is going to tell us about dark energy.

TKF: How does gathering information on billions of galaxies help us learn more about dark energy?

Hitoshi Murayama: Dark energy is accelerating the expansion of the universe and ripping it apart. The questions we are asking are: Where is the universe going? What is its fate? Is it getting completely ripped apart at some point? Does the universe end? Or does it go on forever? Does the universe slow down at some point? To understand these questions, it's like trying to understand how quickly the population of a given country is aging. You can't understand the trend of where the country is going just by looking at a small number of people. You have to do a census of the entire population. In a similar way, you need to really look at a vast number of galaxies so you can understand the trend of where the universe is going. We are taking a cosmic census with LSST.

TKF: The main technique the LSST will use to learn more about dark energy will be gravitational lensing. Dark energy is the mysterious, invisible force that is pushing open and shaping the universe. Can you elaborate on why this is important and how LSST will help realize its full potential?

S.B.: It's extremely difficult to detect the dark energy that seems to be causing our universe to accelerate. Through gravitational lensing, however, it's possible to do so by observing how much dark matter is being pulled together by gravity. And by looking at how much this dark matter clumps up early and later on in the universe, we can see how much the universe is being stretched apart at different times. With LSST, there will be a huge increase in the number of galaxies that we can detect and observe. LSST will also let us identify how far away the galaxies are. This is important. If we want to see how fast the universe is clumping together at different times, we need to know at what time and how far away we're looking.

S.K.: With LSST, we're trying to measure the subtle distortion of the appearance of galaxies caused by clumps of dark matter. We do this by looking for correlations in galaxies' shapes depending on their position with respect to one another. Of course, there's uncertainty associated with that kind of measurement on the relatively small scales of individual galaxies, and the dominant source of that uncertainty is that galaxies have intrinsic shapes—some are spiral-shaped, some are round, and so on, and we are seeing them at different viewing angles, too. Increasing the number of galaxies with LSST makes doing this a far more statistically powerful and thus precise measurement of the effect of gravitational lensing caused by dark matter and how the clumping of dark matter has changed over the universe's history.

LSST will also help address something called cosmic variance. This happens when we're making comparisons of what we see against a statistical prediction of what an ensemble of possible universes might look like. We only live in one universe, so there's an inherent error associated with how good those statistical predictions are of what our universe should look like when applied to the largest scales of great fields of galaxies. The only way to try and statistically beat that cosmic variance down is to survey as much of the sky as possible, and that's the other area where LSST is breaking new ground.

TKF: Will the gravitational lensing observations by LSST be more accurate than anything before?

S.K.: One of the reasons I personally got motivated to work on LSST was because of the difficulty in making the sort of weak lensing measurements that Sarah described.

S.B.: Typically, telescopes distort the images of galaxies by more than the gravitational lensing effect we are trying to measure. And in order to learn about dark matter and dark energy from gravitational lensing, we've got to not just detect the gravitational lensing signal but measure it to about one-percent accuracy. So we've got to get rid of these effects from the optics in the telescope before we can do anything to learn about cosmology.

S.K.: A lot of the initial work in this field has been plagued by issues associated with the basic telescopes and cameras used. It was hard to separate out the cosmic signals that people were looking for from spurious effects that were introduced by the instrumentation. LSST is actually the first telescope that will have ever been built with the notion of doing weak lensing in mind. We have taken great care to model in detail the whole system, from the telescope to the camera to the atmosphere that we are looking through, to understand the particular issues in the system that could compromise weak lensing measurements. That approach has been a clear driver in how we design the facility and how we calibrate it. It's been a big motivation for me personally and for the entire LSST team.
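To give a feel for why this matters, here is a toy Python model, my own illustrative sketch rather than the LSST algorithm, in which the instrument adds a spurious ellipticity several times larger than the lensing signal; unless that contribution is modeled and subtracted, the percent-level shear is swamped.

```python
import numpy as np

# Toy model (not the LSST algorithm): observed ellipticity = intrinsic shape
# + lensing shear + an instrumental (PSF) contribution. The PSF term here is
# several times the lensing signal, which is why it must be modeled and
# removed before the shear can be measured to ~1% accuracy.
rng = np.random.default_rng(1)
n_gal = 2_000_000
sigma_e = 0.26        # intrinsic shape scatter (assumption)
shear = 0.01          # true lensing signal (assumption)
psf_bias = 0.03       # spurious ellipticity from optics/atmosphere (assumption)

observed = shear + psf_bias + sigma_e * rng.standard_normal(n_gal)

raw = observed.mean()                   # ignores the instrument entirely
corrected = observed.mean() - psf_bias  # assumes a perfect PSF model
print(f"true shear     : {shear:+.4f}")
print(f"raw estimate   : {raw:+.4f}  (biased by the PSF)")
print(f"PSF-corrected  : {corrected:+.4f}")
```

In practice the correction is far more involved (the PSF varies across the field and in time), which is exactly why the telescope, camera and atmosphere are modeled end to end as described above.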

TKF: As LSST reveals the universe's past, will it also help us predict the future of the universe?

H.M.: Yes, it will. Because LSST will survey the sky so quickly and repeatedly, it will show how the universe is changing over time. For example, we will be able to see how a supernova changes from one time period to another. This kind of information should prove extremely useful in deciphering the nature of dark energy, for instance.

S.K.: This is one way LSST will observe changes in the universe and gather information on dark energy beyond gravitational lensing. In fact, the way the acceleration of the universe by dark energy was first discovered in 1998 was through the measurement of what are called Type Ia supernovae. These are exploding stars where we believe we understand the typical intrinsic brightness of the explosion. Therefore, the apparent brightness of a supernova — how faint the supernova appears when we see it — is a clear measure of how far away the object is. That is because objects that are farther away are dimmer than closer objects. By measuring a population of Type Ia supernovae, we can figure out their true distances from us and how those distances have increased over time. Put those two pieces of information together, and that's a way of determining the expansion rate of the universe.
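The arithmetic behind a "standard candle" is the distance-modulus relation, m - M = 5 log10(d / 10 pc). Here is a short Python sketch using the commonly quoted peak absolute magnitude of a Type Ia supernova (about -19.3); the apparent magnitudes are illustrative, and cosmological corrections (redshifting, K-corrections) are ignored.

```python
import math

# Standard-candle distance from the distance modulus:
#   m - M = 5 * log10(d / 10 pc)
# M ~ -19.3 is the commonly quoted peak absolute magnitude of a Type Ia
# supernova; the apparent magnitudes m below are just illustrative.
M_TYPE_IA = -19.3

def luminosity_distance_mpc(apparent_mag: float, absolute_mag: float = M_TYPE_IA) -> float:
    """Distance in megaparsecs implied by the distance modulus (no cosmological corrections)."""
    d_parsec = 10.0 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)
    return d_parsec / 1.0e6

for m in (14.0, 19.0, 24.0):
    print(f"m = {m:4.1f}  ->  d ~ {luminosity_distance_mpc(m):,.0f} Mpc")
```

A fainter apparent magnitude therefore maps directly onto a larger distance, which is how a population of such supernovae traces the expansion history.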

This analysis was done for the initial discovery of the accelerating cosmic expansion with a relatively small number of supernovae — just tens. LSST will measure an enormous number of supernovae, something like 250,000 per year. Only a fraction of those will be very well characterized, but that number will still be in the tens of thousands per year. That will be very useful for understanding how our universe has evolved.

TKF: LSST will gather a prodigious amount of data. How will this information be made available to scientists and the public alike for parsing?

S.K.: Dealing with the enormous size of the database LSST will produce is a challenge. Over its ten-year run, LSST will generate something like a couple hundred petabytes of data, where a petabyte is 10-to-the-15th bytes. That's more data, by a lot, than everything that's ever been written in any language in human history.

The data will be made public to the scientific community largely in the form of catalogs of objects and their properties. But those catalogs can be trillions of lines long. So one of the challenges is not so much how you acquire and store the data, but how you actually find anything in something that big. It's the needle-in-the-haystack problem. That's where there need to be advances, because the current techniques we use to query catalogs — to say "find me such and such" — don't scale very well to data of this size. So a lot of new computer science ideas have to be invoked to make that work.
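As a purely illustrative sketch of the idea behind fast catalog queries (production systems use schemes such as HTM or HEALPix indexing inside the database; the functions and numbers here are hypothetical), rows can be pre-bucketed by a coarse sky cell so a small-radius search touches only a few buckets instead of scanning the whole table:

```python
from collections import defaultdict

# Toy spatial index: bucket catalog rows by a coarse 1-degree sky cell so a
# small-radius search inspects only nearby cells instead of the full table.
# Uses a flat-sky approximation (ignores the cos(dec) factor) for brevity.

def cell(ra_deg: float, dec_deg: float) -> tuple[int, int]:
    return (int(ra_deg), int(dec_deg))

index = defaultdict(list)

def ingest(object_id: int, ra: float, dec: float) -> None:
    index[cell(ra, dec)].append((object_id, ra, dec))

def cone_search(ra: float, dec: float, radius_deg: float) -> list[int]:
    """Return object IDs within radius_deg of (ra, dec), flat-sky approximation."""
    hits = []
    r = int(radius_deg) + 1
    for dra in range(-r, r + 1):
        for ddec in range(-r, r + 1):
            for oid, ora, odec in index.get((int(ra) + dra, int(dec) + ddec), []):
                if (ora - ra) ** 2 + (odec - dec) ** 2 <= radius_deg ** 2:
                    hits.append(oid)
    return hits

# Example: a few fake rows, then a 0.5-degree search around (150.1, -20.2).
ingest(1, 150.12, -20.25)
ingest(2, 150.90, -20.10)
ingest(3, 10.00, 5.00)
print(cone_search(150.1, -20.2, 0.5))   # -> [1]
```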

H.M.: One thing that we at Kavli IPMU are pursuing right now is a sort of precursor project to LSST called Hyper Suprime-Cam, using the Subaru Telescope. It's smaller than LSST, but it's trying to do many of the things that LSST is after, like looking for weak gravitational lensing and trying to understand dark energy. We are already facing the challenge of dealing with a large data set. One thing we would like to pursue at Kavli IPMU, and of course LSST is already doing it, is to get a lot of people in computer science and statistics involved in this. I believe a new area of statistics will be created by the need to handle these large data sets. That is the interdisciplinary aspect of this project: it's a large astronomy survey that will influence other areas of science.

TKF: Are any "citizen science" projects envisioned for LSST, like Galaxy Zoo, a website where astronomy buffs classify the shapes of millions of galaxies imaged by the Sloan Digital Sky Survey?

S.K.: Data will be made available right away. So LSST will in some sense bring the universe home to anybody with a personal computer, who can log on and look at any part of the southern hemisphere's sky at any given time. So there's a tremendous potential there to engage the public not only in learning about science, but actually in doing science and interacting directly with the universe.

We have people involved in LSST who are intimately tied into Galaxy Zoo. We're looking into how to incorporate citizens and crowdsource the science investigations of LSST. One of these investigations is strong gravitational lensing. Sarah has talked about weak gravitational lensing, which is a very subtle distortion of the appearance of background galaxies. But it turns out that if you put a galaxy right behind a concentration of dark matter found in a massive foreground galaxy cluster, the distortions can get very significant. You can actually see multiple images of the background galaxy in a single image, bent all the way around the foreground galaxy cluster. The detection of those strong gravitational lenses and the analysis of the light patterns you see within them also yields complementary information about fundamental cosmological parameters. But it requires recognizing what is in fact a strong gravitational lensing event, as well as modeling the distribution of dark matter that gives rise to the strength of that particular lensing. Colleagues of Hitoshi and mine have already created a tool to help with this effort, called SpaceWarps (www.spacewarps.org). The tool lets the public look for strong gravitational lenses using data from the Sloan Digital Sky Survey and play around with dark matter modeling to see if they can get something that looks like the real data.

H.M.: This has been incredibly successful. Scientists have developed computer programs to automatically look for these strongly lensed galaxies, but even an algorithm written by the best scientists can still miss some of these strongly lensed objects. Regular citizens, however, often manage to find candidates for strongly lensed galaxies that the computer algorithm has missed. Not only is this great fun for the people who get involved, it can help the science as well, especially with a project as large as LSST.

TKF: In the hunt for dark energy's signature on the cosmos, LSST is just one of many current and planned efforts. Sarah, how will LSST observations tie in with the Dark Energy Survey you're working on, and Hitoshi, how will LSST complement the Hyper Suprime-Cam?

S.B.: So the Dark Energy Survey is going to image one-eighth of the whole sky and have 300 million galaxy images. About two years of data have been taken so far, with about three more years to go. We'll be doing maps of dark matter and measurements of dark energy. The preparation for LSST that we are doing via DES will be essential.

H.M.: Hyper Suprime-Cam is similar to the Dark Energy Survey. It's a nearly billion-pixel camera looking at nearly 10 million galaxies. Following up on the Hyper Suprime-Cam imaging surveys, we would like to measure what we call spectra for a couple million galaxies.

S.K.: The measurement of spectra as an addition to imaging tells us not only about the structure of matter in the universe but also how much the matter is moving with respect to the overall, accelerating cosmic expansion due to dark energy. Spectra are an additional, very important piece of information in constraining cosmological models.
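The basic way a spectrum carries this information is through the redshift of known spectral lines. A minimal Python sketch, using the H-alpha rest wavelength and purely illustrative observed wavelengths, shows the calculation:

```python
# Redshift from a shifted spectral line: z = (lambda_obs - lambda_rest) / lambda_rest.
# H-alpha rest wavelength is 656.28 nm; the observed wavelengths are illustrative.
HALPHA_REST_NM = 656.28

def redshift(observed_nm: float, rest_nm: float = HALPHA_REST_NM) -> float:
    return (observed_nm - rest_nm) / rest_nm

for obs in (662.8, 721.9, 984.4):
    print(f"observed {obs:6.1f} nm  ->  z = {redshift(obs):.3f}")
```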

H.M.: We will measure spectra with an instrument called the Prime Focus Spectrograph, which is scheduled to start operations in 2017, also on the Subaru Telescope. We will take very deep exposures to get spectra of some of these interesting objects, such as galaxies where lensing is taking place and supernovae, which will also allow us to make much more precise measurements of dark energy.

Like the Hyper Suprime-Cam, LSST can only do imaging. So I'm hoping when LSST comes online in the 2020s, we will already have the Prime Focus Spectrograph operational, and we will be able to help each other. LSST's huge amount of data will contain many interesting objects we would like to study with this Prime Focus Spectrograph.

S.K.: All these dark matter and dark energy telescope projects are very complementary to each other. It's because of the scientific importance of these really fundamental pressing questions — what is the nature of dark matter and dark energy? — that the various different funding institutions around the world have been eager to invest in such an array of different complementary projects. I think that's great, and it just shows how important this general problem is.

TKF: Hitoshi, you mentioned earlier the interdisciplinary approach fostered by LSST and projects like it, and you've spoken before about how having different scientific disciplines and perspectives together leads to breakthrough thinking — a major goal of Kavli IPMU. Your primary expertise is in particle physics, but you work on many other areas of physics. Could you describe how observations of the very biggest scales of the dark universe with LSST will inform work on the very smallest, subatomic scales, and vice versa?

H.M.: It's really incredible to think about this point. The biggest thing we can observe in the universe has to have something to do with the smallest things we can think of and all the matter we see around us.

S.B.: It is amazing that you can look at the largest scales and find out about the smallest things.

H.M.: For more than a hundred years, particle physicists have been trying to understand what everything around us is made of. We made huge progress by building a theory called the standard model of particle physics in the 20th century, which is really a milestone of science. Discovering the Higgs boson at the Large Hadron Collider at CERN in 2012 really nailed down that the standard model is the right theory of everything we see around us. But it turns out that what we see around us makes up only five percent of the universe. So there is this feeling among particle physicists of "what have we been doing for a hundred years?" We've only accounted for five percent of the universe! We still need to understand the remaining 95 percent of the universe, which is dark matter and dark energy. It's a huge problem, and we really have no idea what they are.

A way I explain what dark matter is: it's the mother from whom we got separated at birth. What I mean by this is that without dark matter, there's no structure to the universe, no galaxies, no stars, and we wouldn't be here. Dark matter, like a mother, is the reason we exist, but we haven't met her and have never managed to thank her. So that's the reason we would like to know who she is, how she came to exist and how she shaped us. That's the connection between the search for the fundamental constituents of the universe, which is what particle physicists are after, and this largest scale of observation done with LSST.

TKF: Given LSST's vast vista on the universe, it is fully expected that the project will turn up the unexpected. Any ideas or speculations on what tracking such a huge portion of the universe might newly reveal?

S.K.: That's sort of like asking, "what are the unknown unknowns?" [laughter]

TKF: Yes — good luck figuring those out!

S.K.: Let me just say, one of the great things about astrophysics is that we have explicit theoretical predictions we're trying to test out by taking measurements of the universe. That approach is more akin to many other areas of experimental physics, like searching for the Higgs boson with the Large Hadron Collider, as Hitoshi mentioned earlier. But there's also this wonderful history in astronomy that every time we build a bigger and better facility, we always find all kinds of new things we never envisioned.

If you go back — unfortunately I'm old enough to remember these days — to the period before the launch of the Hubble Space Telescope, it's interesting to see what people had thought were going to be the most exciting things to do with Hubble. Many of those things were done and they were definitely exciting. But I think what many people felt was the most exciting was the stuff we didn't even think to ask about, like the discovery of dark energy Hubble helped make. So I think a lot of us have expectations of similar kinds of discoveries for facilities like LSST. We will make the measurement we're intending to make, but there will be a whole bunch of other exciting stuff that we never even dreamed of that'll come for free on top.


SpaceX's plan could make astronomy 'impossible'

SpaceX has said it plans to paint its satellites black on the sides that face Earth, making them less reflective. (It did not appear to have done so for the recently launched batch, however.) But even if it did paint them, telescopes like LSST would still be able to detect the satellites' light, Tyson said.

What's more, the black paint would do nothing to block radio waves.

Satellite communications use wavelengths similar to the ones radio telescopes on the ground use to study objects in space. For now, the radio interference from satellites is manageable for most telescopes. But that would likely change if the number of satellites in orbit grows from the current total of 2,000 to something closer to 50,000.

The National Radio Astronomy Observatory has been talking with SpaceX about switching satellite communications to different radio frequencies, Nature reported.

The American Astronomical Society (AAS) has also been meeting with SpaceX to discuss the impact of its satellites, according to the Times. Lowenthal is one of the experts on that committee.

"So far, they've been quite open and generous with their data," he said. "But they have not made any promises."

Other companies, including Amazon, Telesat, and OneWeb, are also working on plans to launch their own satellite constellations, though none are as expansive as SpaceX's.

"There is a point at which it makes ground-based astronomy impossible to do," astronomer Jonathan McDowell told the Times. "I'm not saying Starlink is that point. But if you just don't worry about it and go another 10 years with more and more mega-constellations, eventually you are going to come to a point where you can't do astronomy anymore."


Radio interference

Radio astronomers face a second set of challenges. They observe the Universe in wavelengths of light that are also used for satellite communications. The use of such frequencies is regulated, but the huge number of planned satellites complicates the situation, says Tony Beasley, director of the US National Radio Astronomy Observatory in Charlottesville, Virginia. As satellites communicate with ground stations, their signals could interfere with radio-astronomy observations, rendering the astronomy data useless.


The observatory is talking with SpaceX and OneWeb about the frequencies that those megaconstellations will use for their broadcasts. Companies might decide to shift the frequencies at which they broadcast away from those used for radio astronomy. Another idea is for satellites to temporarily shut off their communications as they pass over radio-astronomy facilities.

A further issue with megaconstellations is that the sheer number of satellites will complicate efforts to manage growing congestion in space. Even if only some of the planned satellites are eventually launched, they will worsen the space-junk problem. Amazon has estimated that if 1 in 20 of its planned satellites fails, there is a 6% chance of collision with another orbiting object, which would generate more space debris.

The first batch of Starlinks have already caused some congestion. In September, the European Space Agency (ESA) had to manoeuvre its Aeolus wind-mapping satellite out of the way of a Starlink satellite. The Starlinks are supposed to automatically move away from potential collisions, but a communications glitch between ESA and SpaceX meant each didn’t know what the other was doing. The incident highlighted the fact that satellite operators don’t have a universal strategy if two active satellites are on a potential collision course, says Holger Krag, head of ESA’s space-debris office in Darmstadt, Germany.

He and his colleagues are hoping to help develop a global collision-avoidance system that automatically detects potential crashes and orders satellites to move to safer locations. “We would like to see that in two to three years,” Krag says.

But with the megaconstellations already becoming reality, the operators are running out of time.