This change affects internal unit testing only.
It does not affect developers who use Astronomy Engine.
Upgraded the HYG database used for verification of
constellation calculations to v 3.5.1.
See conversation at:
https://github.com/astronexus/HYG-Database/issues/21
The star database changed again, which causes my hash check
to fail. This time I locked on to the specific commit of the
file, so my build process won't break if it is changed again.
The star database file hygdata_v3.csv has been updated.
Updated the expected checksum for it.
Reworked the downloader to check for checksum disagreement.
If checksum doesn't match, delete the file and download,
then try the checksum again.
This change will automatically fix obsolete files that have already
been downloaded on contributor's development systems.
I refactored the unit tests for all the demo programs
to follow a different pattern that makes it simpler
to add more demo tests in the future.
The main thing is that correct output and generated
output are now in separate directories `correct` and `test`.
I have moved the test scripts from `test/test` to `./demotest`
in all the langauge demo directories.
This makes it simpler to clean up any stale generated
files before each test run by `rm -f test/*.txt`.
I stumbled across this while making the Java demo tests,
and it was a better solution, so now all the other languages
are consistent with the Java demo tests.
In the C demo tests, I also decided to compile all the
binary executables into a subdirectory `bin` that can
be cleaned out before each run, to make sure there are
no stale executables from an earlier run.
The sha256sum and md5sum utilities are available by
default on Linux, but not Windows or Mac OS.
I created the script `checksum.py` that can perform
sha256 and md5 checksum verification on all 3 systems.
Got rid of the ugly checksum.bat I was using on Windows.
Deleted the md5 checksum files, since I only need sha256
for now.
Before this change, I was always skipping verification
of downloads on Mac systems.
We are having difficulty getting Kotlin code to build for
both JVM and Native. For now, the priority is to support JVM,
so I am turning off installation of the Kotlin Native compiler.
Nothing very interesting yet.
Just building a very basic Kotlin Native app
to make sure build and execute work on GitHub Actions,
on Linux and Mac OS. I will worry about Windows later.
Restructured the Java code so we pass in command
line arguments to select which demo we want to run.
We will also pass in date/time, latitude/longitude,
or whatever numeric data we need for future demos.
Automated test run of the Java demos from the
unit test suite.
Instead of being executed directly by the GitHub Actions
yml file, the Kotlin build now has been integrated with
the build/test steps for the other 4 languages in the
bash script `generate/run` and the Windows batch file
`generate/run.bat`. This will be necessary to control the
order of execution, because the Kotlin source code will have
to be written by the code generator before it is built
and executed.
I also added hints for myself and other contributors about
how to set up Kotlin/JDK tools on a new developement machine.
These instructions are not needed by most users of Astronomy Engine,
just contributors.
I have gravsim_test.c to the point where it calculates a
standard deviation of error between TOP2013 and Astronomy Engine
for calculating the position of Pluto over 10 worst-case samples.
My baseline is now 0.205303 arcminutes of heliocentric position error.
For Runge-Kutta (or some other method) to be an improvement, it
has to beat that score without incurring significant extra work
or larger memory consumption.
Starting work on support for galatic coordinates.
Generate a test data file using calculations made
by the NOVAS function equ2gal(). Later I will use
this data to verify the conversion functions I
write for Astronomy Engine.
The test build failed because diffcalc reported a small
discrepancy between the C and C# output.
So I made the threshold more lenient for now.
I want to come back later and figure out if I can get back
to exact agreement between C and C# code.
Told wget not to output rediculous progress bar stuff
that eats thousands of lines of log output.
Before making these changes, I had the following discrepancies
between the calculations made by the different programming
language implementations of Astronomy Engine:
C vs C#: 5.55112e-17, worst line number = 6
C vs JS: 2.78533e-12, worst line number = 196936
C vs PY: 1.52767e-12, worst line number = 159834
Now the results are:
Diffing calculations: C vs C#
ctest(Diff): Maximum numeric difference = 5.55112e-17, worst line number = 5
Diffing calculations: C vs JS
ctest(Diff): Maximum numeric difference = 1.02318e-12, worst line number = 133677
Diffing calculations: C vs PY
ctest(Diff): Maximum numeric difference = 5.68434e-14, worst line number = 49066
Diffing calculations: JS vs PY
ctest(Diff): Maximum numeric difference = 1.02318e-12, worst line number = 133677
Here is how I did this:
1. Use new constants HOUR2RAD, RAD2HOUR that directly convert between radians and sidereal hours.
This reduces tiny roundoff errors in the conversions.
2. In VSOP longitude calculations, keep clamping the angular sum to
the range [-2pi, +2pi], to prevent it from accumulating thousands
of radians. This reduces the accumulated error in the final result
before it is fed into trig functions.
The remaining discrepancies are largely because of an "azimuth amplification" effect:
When converting equatorial coordinates to horizontal coordinates, an object near
the zenith (or nadir) has an azimuth that is highly sensitive to the input
equatorial coordinates. A tiny change in right ascension (RA) can cause a much
larger change in azimuth.
I tracked down the RA discrepancy, and it is due to a different behavior
of the atan2 function in C and JavaScript. There are cases where the least
significant decimal digit is off by 1, as if due to a difference of opinion
about rounding policy.
My best thought is to go back and have a more nuanced diffcalc that
applies less strict tests for azimuth values than the other calculated values.
It seems like every other computed quantity is less sensitive, because solar
system bodies tend to stay away from "poles" of other angular coordinate
systems: their ecliptic latitudes and equatorial declinations are usually
reasonably close to zero. Therefore, right ascensions and ecliptic longitudes
are usually insensitive to changes in the cartesian coordinates they
are calculated from.
I want to experiment with truncating the L1.2 series to
sacrifice some accuracy for smaller generated code.
To that end, I implemented the ability to save the
Jupiter moons model after loading it. I added a 'jmopt'
command to the 'generate' program that will do this
optimization. For now, it just loads the model and
saves it back to a different file. Then the code generator
loads from the saved file instead of the original.
This commit verifies that everything is still working,
before I start truncating the series.
Now that I no longer need to generate Chebyshev models
or TOP2013 models for Pluto, I got rid of all the
code in generate.c that is no longer needed.
This whacked about 1000 lines of code.
Generating an embryonic TOP2013 Pluto model, along with the old
Chebyshev resampling model of Pluto, into the Linux and Windows
build processes.
The TOP2013 Pluto model isn't used for anything, and it isn't
optimized properly yet, but at least this helps validate my code
automatically as I go forward.
Adding infrastructure for loading TOP2013 models of planets
and calculating them. Will start with a unit test to verify
I'm calculating the formulas correctly.
I'm starting to work on a replacement for Pluto calculations that
are not bounded in time. I'm trying the TOP2013 model that calculates
elliptic parameters of the outer planets Jupiter..Pluto.
I needed to download the 24MB file TOP2013.dat.
I already had redundant download logic for two files, and this was a third.
So I eliminated the redundancy and generalized the download logic
in the new bash function Download.
Wrote stub C functions for finding transits.
Updated html files containing Espenak test data for Mercury, Venus.
Updated norm.py to convert the html files to easy-to-use text files.
I had to increase certain error tolerances in the unit tests.
Reworked the unit tests to make more sense by waiting until
each language step is done to check against each other.
That way I can run a single language step independently.
I found that lunar eclipse data is available for many centuries.
I downloaded the data for the years 1701..2200.
Wrote norm.py to extract and convert the parts I care about
into a format that will be much easier to parse in the unit
tests for all four languages.
Regenerate the normalized data from the 'run' script.
This way, I have documentation for where the data came from.
I'm using the HYG star database v3 from:
https://github.com/astronexus/HYG-Database
I compare the star constellations it reports against
what I calculate from the star RA/DEC it lists.
When I try this against all stars in the database, I
find 25 disagreements about which constellation contains
the star. Another person found 3 disagreements. See:
https://github.com/astronexus/HYG-Database/issues/21
For now, I'm testing only the stars brighter than mag 4.890,
which eliminates all the disagreements, and still gets me
over 1000 test cases.
Also, now I'm verifying ephemeris file and star database
checksums whether or not they have just been downloaded.
The idea is to catch corruption or unexpected changes
each time I run the unit test.
Decided to move call to makedoc script from run script.
It was confusing that it was hidden inside unit_test_js,
especially because it invokes the code generator for
all supported languages.
Created skeleton test harness for validating the demo programs.
Created stub moonphase.py.
Copied correct demo program outputs from nodejs; will tweak as needed.
Call the Python demo test harness from the 'run' script.