Compare commits


7 Commits

Author SHA1 Message Date
Alejandro Martinez
a4e5157d40 Revert "Add ANT+ product info (#3307)" (#3309)
This reverts commit d7fce1565c.
2020-01-24 18:09:56 -03:00
thebaron06
d7fce1565c Add ANT+ product info (#3307)
Add the identification of Garmin's HRM-Tri and their power meter Vector 3.
2020-01-24 18:03:55 -03:00
Mark Liversedge
525fcb0a66 3.5 BUILD INCREMENT
.. re-issue of 3.5 binaries with Strava API guideline compliance, as
   part of the 'rate limit' requirements.
2020-01-17 20:16:54 +00:00
Mark Liversedge
dc3ce7e365 Connect with Strava
.. the authorise button on the add cloud wizard now
   shows a 'Connect with Strava' icon

.. all other services continue to have a button that
   is labelled 'Authorise'

.. this is needed to comply with the Strava API application
   guidelines.
2020-01-17 17:42:40 +00:00
Mark Liversedge
3133cdab3b Compatible with Strava
.. logo added to the about box, only tested on hi-dpi display
   (may need scaling applied for lower resolution displays).
2020-01-17 17:42:23 +00:00
Mark Liversedge
cc91520e76 View on Strava
.. when data is downloaded from strava we now set the metadata
   tag "StravaID" to the id of the activity on Strava.

.. On RideSummary a link is added at the bottom to view the activity
   on Strava if the "StravaID" is set.

.. if the user clicks on the link the summary is replaced with the
   strava page for the ride:
      e.g. https://www.strava.com/activities/962515512

.. this is part of a couple of updates to comply with the Strava
   guidelines for consumption of the Strava v3 API, see:
      https://developers.strava.com/guidelines/
2020-01-17 17:42:12 +00:00
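The link construction this commit describes can be sketched in shell; this is a hypothetical illustration, with the StravaID value taken from the commit message's example rather than a real lookup.

```shell
# Hypothetical sketch of the RideSummary link: the "StravaID" metadata tag
# is appended to the Strava activities base URL (id is the commit's example).
strava_id="962515512"
url="https://www.strava.com/activities/${strava_id}"
echo "$url"
```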
Mark Liversedge
372dd5c144 SEGV on Overview Chart
.. when no rides available on new user.

Fixes #3295
2020-01-13 20:24:44 +00:00
785 changed files with 99750 additions and 306252 deletions

.gitignore

@@ -36,11 +36,6 @@ plugins/
resources/
src/debug/
src/release/
doc/doxygen/latex
doc/doxygen/html
# qt creator builds
build-src*
qwt/src/debug/
qwt/src/release/


@@ -1,20 +1,12 @@
if: commit_message =~ /\[publish binaries\]/
branches:
only:
- master
- /^[vV]\d+\.\d+(\.\d+)?(-\S*)?$/
language: cpp
cache:
directories:
- $HOME/Library/Caches/Homebrew
- qwt
- D2XX
- site-packages
- VLC
os:
- osx
- linux
dist: xenial
language: c++
env:
global:
- BRANCH=master
@@ -36,31 +28,33 @@ env:
- secure: cc0pAJjkmFNw2bO3zVACmtyHTwINAHALrtUxi+nRD+FhOO9KxuxuuwvcKCZKfp9EUOjz5PrYWKV1ZH/zt/jMix8A4Gyue2mWX8WYih7aTmJBcJWsFNTCybnClreKBCh18kHdWWhkmhk8EMINDvlqxzJZGpcNO04gxhL9wuLLrNQ=
- secure: em0xXIm69rMHsHXYQiizeJB7dEFBkX33PsWDHwBNrX6lFBued23eL96KJC4RVbk6A+AHFtXFATrreZ14D5JH/E/37CXhe3X2R93WqiPUSH0s7NI4fFA1BroKUNAlqO4bMqDBidtNmwMPaLTXjaOnOZyvbAG7z+QV3TKC8tOeZDU=
- secure: VFaSERlgsjzjiDQhKw8XFvQrjdvFzHHL7V3NQg+RfELHoT6I1pAGFdl/+lRBIVOiVkbQ6XnpBA28nlf0QydPHElRZdqmh0azQV/bkUXD4ffPE8q0iSqeqhAZ+5L05K5K+Gby/y8TZE4FX6e/7trFL7oq+h9x0gq5RQO8rAcTV84=
- secure: eTSJmS38EsTkI22yvDJLUrBxSyLDwd3pDRsyLQfZ3ThN0UJ9cQN2uB7aLy3OzNYadpi+Axlr46MgG0G5qGV1hHXkf+C4orGkURQWxHA7L5R/oE98TuYMO1bisZu9dJEVbmEM4cehCjbB7DExzxK4m6+oTJsWhVbIwlNh5Poq/v4=
- secure: gUDTEErUOhzkSVofEvdw1jqHHsE/K+/SOqRBKDToaFPhi6XK+Tvu1LqPMjfPdjYLaCSiwc/R79fJrAEuK+7KSwdiLEnDv3RMpRS5g1UWyJ/ZYd5xNR+WiBqUvnY/S/CJokuNw7gBbGq7JCO4pmIGV5YB9FA4Na6MG/eHzTSOIig=
- secure: Qk+gzBLwjrB8abUYzxap10dYSpIeKpB1gqhdoMbqS23G0r1lejnsjutIfReuJGK/efCmhisKN1xIX/InvJWD8z6GsLJFmf3F0oRj7aDJ/X5UIn9Upflje9xgHQafP1FJuzZBWtzandNfPE8EmEOgAQsJZ3c7xBE1SY/6xcJaQTc=
- secure: m4+k3/QcYvqmMoRO8uq3ef2jAO1FWeRVDG/XtlbjBlgmB5OR/zW5c7c1Ywm6IM5yzsi1rRks8GFffZ6gYqXhML10EfGKVbnyBcZZ7HVylNtvxDF68W1BLacChzDs4mGYQSV8kJRGI3EaVNdyFJ5yln/HUZ6qBbQ473MtxprO6BI=
jobs:
include:
- os: osx
osx_image: xcode11.3
compiler: clang
- os: linux
dist: bionic
compiler: gcc
matrix:
include:
- os: osx
osx_image: xcode10.1
compiler: clang
- os: linux
compiler: gcc
before_install:
- travis/$TRAVIS_OS_NAME/before_install.sh
before_script:
- travis/$TRAVIS_OS_NAME/before_script.sh
script:
- travis/$TRAVIS_OS_NAME/script.sh
before_cache:
- if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then brew cleanup; fi
after_success:
- if [[ $TRAVIS_OS_NAME == "osx" ]]; then
export FINAL_NAME=dev-prerelease-branch-master-build-${TRAVIS_BUILD_NUMBER}.dmg;
else
export FINAL_NAME=dev-prerelease-branch-master-build-${TRAVIS_BUILD_NUMBER}.AppImage;
fi
- travis/$TRAVIS_OS_NAME/after_success.sh
deploy:
provider: releases
api_key:
secure: KlfkRM8oGP02y5LhbdxetnhqUG3YzVylvyhT8BTYjdoJtkJr7YXYpdhj9byZ9aiy1gSWI/g7A1X6/P8/McqRtgt4dEYr4Zg8QO7Y7QdTpgNQEwu8ZrkyyG/7b/rSkfFHDjrOAHslLVXuBNwWgi8YW1aTn0rY2AqDbOri7u6tt9Q=
file: src/$FINAL_NAME
skip_cleanup: true
on:
tags: true
repo: GoldenCheetah/GoldenCheetah
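The after_success step above names the build artifact per OS. A minimal stand-alone sketch of that branch, using sample values for the TRAVIS_* variables the CI would provide:

```shell
# Sketch of the after_success artifact naming; TRAVIS_OS_NAME and
# TRAVIS_BUILD_NUMBER are sample values here, set by Travis in real runs.
TRAVIS_OS_NAME=osx
TRAVIS_BUILD_NUMBER=1234
if [ "$TRAVIS_OS_NAME" = "osx" ]; then
  FINAL_NAME=dev-prerelease-branch-master-build-${TRAVIS_BUILD_NUMBER}.dmg
else
  FINAL_NAME=dev-prerelease-branch-master-build-${TRAVIS_BUILD_NUMBER}.AppImage
fi
echo "$FINAL_NAME"
```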


@@ -4,79 +4,112 @@
Mark Liversedge
John Ehrlinger
Ale Martinez
Jul 2022
Version 3.6
Jan 2015
Version 1.2
A walkthrough of building GoldenCheetah from scratch on Ubuntu linux 18.04
This walkthrough should be largely the same for any Debian derivative Linux
distro, and very similar for others using their corresponding package manager.
A walkthrough of building GoldenCheetah from scratch on Ubuntu linux. This walkthrough
should be largely the same for any Linux distro.
CONTENTS
1. BASIC INSTALLATION WITH MANDATORY DEPENDENCIES
- git
- flex
- bison
- QT
- OpenGL
- gsl
- git
2. ADDING OPTIONAL DEPENDENCIES
2. ADDING OPTIONAL DEPENDENCIES WHEN BUILDING VERSION 2
- FTDI D2XX
- SRMIO
- liboauth
- libkml
3. ADDING OPTIONAL DEPENDENCIES WHEN BUILDING VERSION 3
- checking out the release 3 branch & building with MANDATORY dependencies
- flex
- bison
- libical - Diary window and CalDAV support (google/mobileme calendar integration)
- libvlc - Video playback in training mode
- libical - Diary window and CalDAV support (external calendar integration)
- libusb - If you want support for using USB2 sticks in Train View
- R - If you want R charts
- Python - If you want Python charts, scripts and data processors
1. BASIC INSTALLATION WITH MANDATORY DEPENDENCIES
=================================================
Install the Linux distribution of choice on an amd64 platform (Ubuntu 18.04 is used
for this document). You will not need to do this if you already have a Linux
distribution installed. This step is left in to highlight the Linux distribution
the commands below were executed on.
Install the Linux distribution of choice on an i386 or amd64 platform (currently Debian-based distributions and Arch-based distributions are covered). You will not need to do this if you
already have a Linux distribution installed. This step is left in to highlight the
Linux distribution the commands below were executed on.
login and open a terminal to get a shell prompt
Install Qt
----------
Download MANDATORY DEPENDENCIES (browser)
-----------------------------------------
Download and install the Qt SDK from http://qt-project.org/
You can use a browser to download and run the interactive installer; be sure to
select Qt 5.15.2 or a later Qt 5 version, including at least the following modules:
- Desktop gcc 64-bit
- Qt Charts
- Qt WebEngine
Once this step is completed add the bin directory to PATH and test qmake is ok:
$ qmake --version
Once that is completed test that qmake is ok with: qmake --version (should report 4.9.8 or higher)
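The version check can also be scripted: compare the Qt version qmake reports against a minimum with sort -V. The version strings below are sample values, not read from a real qmake; in practice you could extract the real one with `qmake --version | awk '/Using Qt version/ {print $4}'`.

```shell
# Version-compare sketch using sort -V; qt_version is a sample value,
# standing in for the version a real qmake would report.
qt_version="5.15.2"
minimum="5.13.0"
lowest=$(printf '%s\n' "$minimum" "$qt_version" | sort -V | head -n1)
if [ "$lowest" = "$minimum" ]; then
  echo "Qt $qt_version meets the $minimum minimum"
fi
```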
Install git
-----------
DEBIAN-BASED DISTRIBUTION INSTRUCTIONS
--------------------------------------
Install git with:
$ sudo apt-get install git
Say Y to the prompt about all git files installed (git-gui et al)
Install FLEX and BISON
----------------------
You will need flex v2.5.9 or later
$ sudo apt-get install bison
$ sudo apt-get install flex
Install Mesa OpenGL utility library
-----------------------------------
sudo apt-get install libglu1-mesa-dev
Install GSL development libraries
---------------------------------
sudo apt-get -qq install libgsl-dev
ARCH-BASED DISTRIBUTION INSTRUCTIONS
------------------------------------
Install git:
$ sudo pacman -S git
INSTALL FLEX and BISON
----------------------
$ sudo pacman -S flex bison
NEXT STEPS
----------
$ vi gcconfig.pri
Ensure you have the following lines (these are now also in gcconfig.pri.in, which has
been updated to reflect the new dependencies in version 3)
QMAKE_LEX = flex
QMAKE_YACC = bison
win32 {
QMAKE_YACC = bison --file-prefix=y -t
QMAKE_MOVE = cmd /c move
QMAKE_DEL_FILE = rm -f
}
Build!
------
$ make clean
$ qmake
$ make
To compile translations you need the Qt tool lrelease.
If it is not found using the defaults in src/src.pro then set the full path and filename in gcconfig.pri:
QMAKE_LRELEASE = /usr/bin/lrelease
When building the first time you may get a number of error messages about missing .qm files:
"RCC: Error in 'Resources/application.qrc': Cannot find file 'translations/gc_fr.qm'"
You can ignore these messages for your build. The .qm files will be created during the
build at a later point in time via the "lrelease" command you configured in gcconfig.pri
If your QT build includes its own local compress libs then you should comment the line below in gcconfig.pri,
otherwise you will need to have the compress libraries installed separately.
#LIBZ_INCLUDE =
#LIBZ_LIBS = -lz
You will now have a release3 binary but with none of the release3 dependencies compiled in.
Get latest GOLDEN CHEETAH source files
--------------------------------------
$ mkdir -p ~/Projects
$ cd ~/Projects
$ mkdir -p ~/Projects/Live
$ cd ~/Projects/Live
$ git clone git://github.com/GoldenCheetah/GoldenCheetah.git
$ cd GoldenCheetah
@@ -88,34 +121,11 @@ $ cd ../src
$ cp gcconfig.pri.in gcconfig.pri
$ vi gcconfig.pri
Uncomment below and configure the location of the GNU scientific library; this is a mandatory dependency.
Comment out the D2XX_INCLUDE and SRMIO_INSTALL lines for now (put # in the first character of the line
to comment them out); we will install those in a moment, if we need to.
#GSL_INCLUDES = /usr/include
#GSL_LIBS = -lgsl -lgslcblas -lm
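The same gcconfig.pri edits can be scripted with sed instead of vi. The sketch below runs on a scratch copy containing the lines quoted above, so the file path and contents are illustrative only:

```shell
# Scratch demo: comment out D2XX_INCLUDE/SRMIO_INSTALL and uncomment the
# mandatory GSL lines, as the walkthrough describes, using sed.
cat > /tmp/gcconfig.pri <<'EOF'
D2XX_INCLUDE = /usr/local/include
SRMIO_INSTALL = /usr/local
#GSL_INCLUDES = /usr/include
#GSL_LIBS = -lgsl -lgslcblas -lm
EOF
sed -i -e 's/^D2XX_INCLUDE/#D2XX_INCLUDE/' \
       -e 's/^SRMIO_INSTALL/#SRMIO_INSTALL/' \
       -e 's/^#GSL_/GSL_/' /tmp/gcconfig.pri
grep '^GSL_' /tmp/gcconfig.pri
```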
Ensure you have the following lines (these are now also in gcconfig.pri.in, which has
been updated to reflect the new dependencies in version 3.6)
QMAKE_LEX = flex
QMAKE_YACC = bison
win32 {
QMAKE_YACC = bison --file-prefix=y -t
QMAKE_MOVE = cmd /c move
QMAKE_DEL_FILE = rm -f
}
To compile translations you need the Qt tool lrelease.
If it is not found using the defaults in src/src.pro then set the full path and
filename in gcconfig.pri, such that:
QMAKE_LRELEASE = /usr/bin/lrelease
If your QT build doesn't include its own local compress libs then you should uncomment the lines below
and add the library path to LIBZ_INCLUDE =; you will need to have the compress libraries installed separately.
#LIBZ_INCLUDE =
#LIBZ_LIBS = -lz
compiling with gcc -O3 (tree vectorization can have a significant impact)
[or -Ofast]
If you are building for your local host you may find that you get better performance if
compiling with gcc -O3 (tree vectorization can have a significant impact) [or -Ofast]
If so you might like to uncomment:
@@ -126,43 +136,32 @@ Save and exit
$ cd ..
BUILD WITH BASIC CONFIGURATION
------------------------------
$ qmake -recursive
$ make
When building the first time you may get a number of error messages about missing .qm files:
"RCC: Error in 'Resources/application.qrc': Cannot find file 'translations/gc_fr.qm'"
You can ignore these messages for your build. The .qm files will be created
during the build at a later point in time via the "lrelease" command you
configured in gcconfig.pri
Congratulations, you have now built a basic GoldenCheetah and can run it safely. See below for
optional dependencies you can install to support other features.
Congratulations, you have now built a basic GoldenCheetah and can run it
safely from the src folder.
See below for optional dependencies you can install to support other features.
2. ADDING OPTIONAL DEPENDENCIES
===============================
ADDING OPTIONAL DEPENDENCIES WHEN BUILDING VERSION 2
====================================================
D2XX - For Powertap downloads via USB
-------------------------------------
Download the FTDI drivers from http://www.ftdichip.com/Drivers/D2XX.htm and
extract:
Download the FTDI drivers from http://www.ftdichip.com/Drivers/D2XX.htm (e.g. I used Linux
64-bit drivers from http://www.ftdichip.com/Drivers/D2XX/Linux/libftd2xx1.0.4.tar.gz)
$ cd ~/Projects
$ wget http://www.ftdichip.com/Drivers/D2XX/Linux/libftd2xx-x86_64-1.3.6.tgz
$ tar xf libftd2xx-x86_64-1.3.6.tgz
Extract into your home directory (I put mine into ~/Projects/ with the archive manager, which
created a sub-directory ~/Projects/libftd2xx1.0.4)
$ cd ~/Projects/GoldenCheetah/src
$ cd src
$ vi gcconfig.pri
Uncomment the D2XX_INCLUDE entry and make it match (my home is /home/markl)
D2XX_INCLUDE = /home/markl/Projects/libftd2xx-x86_64-1.3.6
D2XX_INCLUDE = /home/markl/libftd2xx1.0.4
Make clean is needed if you have previously built, since source files examine
#defines before including this feature. You can skip it if you know why ;)
Make clean is needed if you have previously built, since source files examine #defines before
including this feature. You can skip it if you know why ;)
$ make clean
$ qmake
$ make
@@ -187,20 +186,20 @@ $ make
$ sudo make install
Lets go config GC and build with SRMIO
$ cd ~/Projects/GoldenCheetah/src
$ cd ~/Projects/Live/GoldenCheetah/src
$ vi gcconfig.pri
Uncomment the SRMIO_INSTALL and replace with the target used from srmio install:
SRMIO_INSTALL = /usr/local/
At the bottom of gcconfig.pri you will see the include directory should
reference from the base install location (/usr/local) make sure it says:
At the bottom of gcconfig.pri you will see the include directory should reference from
the base install location (/usr/local) make sure it says:
SRMIO_INCLUDE = $${SRMIO_INSTALL}/include
SRMIO_LIB = $${SRMIO_INSTALL}/lib/libsrmio.a
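How the two SRMIO paths derive from the base install location can be illustrated in plain shell ($$SRMIO_INSTALL is qmake syntax; an ordinary shell variable stands in for it here):

```shell
# Plain-shell illustration of the qmake path derivation from SRMIO_INSTALL.
SRMIO_INSTALL=/usr/local/
SRMIO_INCLUDE="${SRMIO_INSTALL%/}/include"
SRMIO_LIB="${SRMIO_INSTALL%/}/lib/libsrmio.a"
echo "$SRMIO_INCLUDE"
echo "$SRMIO_LIB"
```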
Make clean is needed if you have previously built, since source files examine
#defines before including this feature. You can skip it if you know why ;)
Make clean is needed if you have previously built, since source files examine #defines before
including this feature. You can skip it if you know why ;)
$ make clean
$ qmake
$ make
@@ -210,7 +209,9 @@ You now have SRM support built in.
LIBKML - For export to Google Earth
-----------------------------------
You will need Google Earth 5.2 or later and therefore libkml that supports this.
You will need Google Earth 5.2 or later and therefore a libkml that supports this. Unfortunately, at the time of writing
the officially packaged libkml is too old, so you will need to install from source, which means you will need to have
subversion and expat installed. You may be able to use the currently packaged libkml with
$ sudo apt-get install libkml-dev
@@ -219,8 +220,7 @@ if this does not work you will need to build from source:
$ sudo apt-get install subversion
$ sudo apt-get install expat libexpat1 libexpat1-dev
Once svn is installed you can grab the libkml source, configure, build and
install:
Once svn is installed you can grab the libkml source and configure build etc:
$ cd ~/Projects
$ svn checkout http://libkml.googlecode.com/svn/trunk/ libkml
$ cd libkml
@@ -237,32 +237,55 @@ if this does not work you will need to build from source:
- examples/{engine,gpx,gx,hellonet,helloworld,regionator,xsd}/Makefile
- and look for the flag -pedantic and remove it. I got this on Linux 64bit builds ymmv.
Once libkml is installed:
Once libkml is installed and built:
$ cd ~/Projects/GoldenCheetah/src
$ cd ~/Projects/Live/GoldenCheetah/src
$ vi gcconfig.pri
Ensure KML_INSTALL=/usr/local
Make clean is needed if you have previously built, since source files examine
#defines before including this feature. You can skip it if you know why ;)
Make clean is needed if you have previously built, since source files examine #defines before
including this feature. You can skip it if you know why ;)
$ make clean
$ qmake
$ make
You can now export rides to Google Earth kml format.
ADDING OPTIONAL DEPENDENCIES WHEN BUILDING VERSION 3
====================================================
NOTE: When you run version 3 it will refresh ride metrics and CP files -- this only occurs the
first time it runs (and will refresh only rides that change after that). I find it is best
to import ride files once your build is where you want it, i.e. don't import until you have
got all your dependencies sorted.
NOTE: To reduce the dependencies on 'dormant' code there are a number of new pieces of source
that are included in the release3 tree. Notably qtsoap from qt-solutions, since it
works but is likely to be archived and deprecated. If and when that happens we may well
adopt whatever classes Trolltech introduce.
LIBICAL - Diary integration with Google or MobileMe calendars
-------------------------------------------------------------
$ sudo apt-get install libical-dev
$ cd ~/Projects/Live/GoldenCheetah/src
$ cd ~/Projects/GoldenCheetah/src
$ sudo apt-get install libical-dev
$ vi gcconfig.pri
ICAL_INSTALL = /usr
ICAL_INCLUDE = /usr/include
ICAL_LIBS = -lical
ICAL_INSTALL=/usr/include
ICAL_LIBS=-lical
Since the src.pro wants ICAL installed in a different place we need to hack it, *** this will
be fixed shortly ***
$ vi src.pro
Comment out the ICAL_LIBS entry:
#ICAL_LIBS = $${ICAL_INSTALL}/lib/libical.a
$ make clean
$ qmake
@@ -270,89 +293,21 @@ $ make
You should now have diary functions.
LIBVLC - Video playback in Realtime
-----------------------------------
NOTE: Upload to MobileMe and Google requires a functioning https lib in Qt. Depending
upon the version installed this might not be the case and it will need to be built and
configured -- this is beyond the scope of this walkthrough. Sorry.
You will need libvlc 3.0.8 or higher for better performance:
LIBVLC - Video playback in Realtime (Experimental)
--------------------------------------------------
sudo add-apt-repository ppa:jonathonf/vlc-3
sudo add-apt-repository ppa:jonathonf/ffmpeg-4
sudo apt-get update
sudo apt-get install vlc libvlc-dev libvlccore-dev
You will need libvlc 1.1.9 or higher (1.1.8 is ok but will segv on exit)
$ sudo apt-get install libvlc-dev
$ cd ~/Projects/GoldenCheetah/src
$ vi gcconfig.pri
Comment out VLC_INSTALL and it should read:
VLC_INSTALL = /usr
$ make clean
$ qmake
$ make
LIBUSB - for using USB2 sticks in Train View on Linux or Windows
----------------------------------------------------------------
$ sudo apt-get install libusb-1.0-0-dev libudev-dev
$ cd ~/Projects/GoldenCheetah/src
$ vi gcconfig.pri
Uncomment or add the following lines:
LIBUSB_USE_V_1 = true # don't use on Windows
LIBUSB_INSTALL = /usr/local
$ make clean
$ qmake
$ make
R Embedding
-----------
Install R 4.0
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys E298A3A825C0D65DFD57CBB651716619E084DAB9
sudo add-apt-repository "deb https://cloud.r-project.org/bin/linux/ubuntu bionic-cran40/"
sudo apt-get update
sudo apt-get install r-base-dev
R --version
$ cd ~/Projects/GoldenCheetah/src
$ vi gcconfig.pri
Uncomment or add the following line:
DEFINES += GC_WANT_R
$ make clean
$ qmake
$ make
Python Embedding
----------------
Install Python 3.7
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt-get update
sudo apt-get install python3.7-dev
python3.7 --version
Install SIP 4.19.8:
cd ~/Projects
wget https://sourceforge.net/projects/pyqt/files/sip/sip-4.19.8/sip-4.19.8.tar.gz
tar xf sip-4.19.8.tar.gz
cd sip-4.19.8
python3.7 configure.py
make
sudo make install
$ cd ~/Projects/GoldenCheetah/src
$ vi gcconfig.pri
Uncomment or add the following lines:
DEFINES += GC_WANT_PYTHON
PYTHONINCLUDES = -I/usr/include/python3.7/
PYTHONLIBS = -L/usr/lib/python3.7/config-3.7m-x86_64-linux-gnu -lpython3.7m
VLC_INSTALL = /usr/include/vlc/
$ make clean
$ qmake
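The gcconfig.pri additions can also be appended from the shell, as the Windows CI script does with echo. A scratch-file sketch using the walkthrough's example paths (these will differ per system):

```shell
# Scratch sketch: append the Python embedding settings to a gcconfig.pri copy.
cfg=/tmp/gcconfig-python.pri
: > "$cfg"
echo 'DEFINES += GC_WANT_PYTHON' >> "$cfg"
echo 'PYTHONINCLUDES = -I/usr/include/python3.7/' >> "$cfg"
echo 'PYTHONLIBS = -L/usr/lib/python3.7/config-3.7m-x86_64-linux-gnu -lpython3.7m' >> "$cfg"
cat "$cfg"
```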


@@ -1,12 +1,3 @@
Update Note: to build GoldenCheetah v3.6 we are using the Homebrew Package
Manager to install dependencies, including Qt and GSL, on the Travis-ci
macOS Mojave build environment.
You can check the travis/osx folder for the complete and updated build scripts,
the minimum Qt version known to work is 5.13 with Qt WebEngine and Qt Charts.
GSL - GNU Scientific Library is a mandatory dependency starting with v3.6
Ale Martinez - Jul, 2022
+++++++++++++++++++++++
MAC OSX BUILD WALKTHROUGH
+++++++++++++++++++++++


@@ -1,11 +1,4 @@
Update Note: to build GoldenCheetah v3.6 we are using Microsoft Visual C++ 2019,
included in the Microsoft Visual Studio 2019 AppVeyor image, with Qt 5.15.2 and GSL 2.7
installed with vcpkg, on the AppVeyor continuous integration platform.
You can check the appveyor.yml for the complete and updated build script,
the minimum Qt version known to work is 5.13 with Qt WebEngine and Qt Charts.
GSL - GNU Scientific Library is a mandatory dependency starting with v3.6
Ale Martinez - Jul, 2022
+++++++++++++++++++++++
WIN32 BUILD WALKTHROUGH
+++++++++++++++++++++++
@@ -14,7 +7,6 @@ Ale Martinez - Jul, 2022
February 2017
These instructions will guide you through a standard build of GoldenCheetah (without external
dependencies or API based services included).
@@ -207,3 +199,9 @@ to contribute is to provide a pull-request.
Cheers.
Joern


@@ -1,10 +1 @@
Issue tracker is **only** for Bugs and Features, please don't open issues for questions or technical support. Before opening a new issue please read the contributing guidelines (link below).
If you have questions, please read the FAQs and User's/Developer's Guide:
* FAQs - https://github.com/GoldenCheetah/GoldenCheetah/wiki/FAQ
* User's Guide - https://github.com/GoldenCheetah/GoldenCheetah/wiki/UG_Main-Page_Users-Guide
* Developer's Guide - https://github.com/GoldenCheetah/GoldenCheetah/wiki/Developers-guide
If you need help or technical support please use the forums:
* Users - https://groups.google.com/forum/#!forum/golden-cheetah-users
* Developers - https://groups.google.com/forum/#!forum/golden-cheetah-developers
Issue tracker is **only** for Bugs and Features; before opening a new issue please read the Contributing document (link at the right) and use the forums if you need help.


@@ -4,11 +4,18 @@
## About
GoldenCheetah is a desktop application for cyclists and triathletes and coaches, providing a rich set of tools and models to analyse, track and predict performance, optimise aerodynamics and train indoors.
GoldenCheetah is an open-source data analysis tool, primarily written in C++
with Qt, for cyclists and triathletes,
with support for training as well.
GoldenCheetah can connect with indoor trainers and cycling equipment such
as cycling computers and power meters to import data.
In addition, GoldenCheetah can connect to cloud services.
It can then manipulate and view the data, as well as analyze it.
GoldenCheetah integrates with most popular cloud services like Strava and Todays Plan, imports data from bike computers, imports downloads from any website like TrainingPeaks and Garmin and will also connect to smart trainers using ANT+ and Bluetooth.
GoldenCheetah is free for everyone to use and modify, released under the GPL v2 open source license with pre-built binaries for Mac, Windows and Linux.
## Installing
@@ -22,15 +29,12 @@ INSTALL-LINUX For building on Linux
INSTALL-MAC For building on Apple OS X
macOS and Linux: [![Build Status](https://app.travis-ci.com/GoldenCheetah/GoldenCheetah.svg?branch=master)](https://app.travis-ci.com/GoldenCheetah/GoldenCheetah)
OSX: [![Build Status](https://travis-ci.org/GoldenCheetah/GoldenCheetah.svg?branch=master)](https://travis-ci.org/GoldenCheetah/GoldenCheetah)
Windows: [![Build status](https://ci.appveyor.com/api/projects/status/i6dwn4m8oyu52ihi?svg=true)](https://ci.appveyor.com/project/Joern-R/goldencheetah-knhd8)
[![Coverity Status](https://scan.coverity.com/projects/7503/badge.svg)](https://scan.coverity.com/projects/goldencheetah-goldencheetah)
Official release builds, snapshots and development builds are all available from http://www.goldencheetah.org
## NOTIO Fork
If you are looking for the NOTIO fork of GoldenCheetah it can be found here: https://github.com/notio-technologies/GCNotio
Alternatively, official builds are available from http://www.goldencheetah.org
whilst the latest developer builds are available from https://github.com/GoldenCheetah/GoldenCheetah/releases


@@ -1,230 +1,32 @@
version: ci.{build}
image: Visual Studio 2019
image: Visual Studio 2015
clone_depth: 1
environment:
GC_GOOGLE_CALENDAR_CLIENT_SECRET:
secure: hwjHTrSAMEbKd9PA+5x/zI4x5Uk4KQm1hdfZzkwiu8k=
GC_GOOGLE_DRIVE_CLIENT_ID:
secure: mNqG+pqpMl21ZFVvAMKvhm2rfOdv42fFpnLwfrvX5QqpWVcHEeBuUFeJeUAZfTR0GQGfWfPOEmhb9CG0W1ZJ05TIyb+kTLrWF7iijCiVR6s=
GC_GOOGLE_DRIVE_CLIENT_SECRET:
secure: T+BaB/L7x4dPPf592e0kfw4sTlAslUXl10irJqiUjpY=
GC_GOOGLE_DRIVE_API_KEY:
secure: oxTAhK/kiLUsXdYvITAgzSqeB5FRcL+XANFuAYpoW5P/xBb7XaLbNnL2gyrmzQeG
GC_CLOUD_OPENDATA_SECRET:
secure: 6fPhBiHKvJeOMqXdHGqpkPS+NpUDMczEXjedx8GcjbHr82ISX+gwSuXfOUDLq/S9
GC_WITHINGS_CONSUMER_SECRET:
secure: 86xAkdoQB8mLXq964/lGCp3ElTSF4k3a27R3UUXt3618guWLyBfsEK5Q0+XSOI3Q38w80CTpmNdwejoISv8Ilg==
GC_NOKIA_CLIENT_SECRET:
secure: pvPWraDplrKeRNamt5MKga8fzDmI2+zgFx+y3lsQE6gmBadZU2xkTIc/xCaP7UPv2erNCmKivfMOh2NIcRmqvIHynDoifNVy2P61KyG5v3E=
GC_DROPBOX_CLIENT_SECRET:
secure: 7pCVnVEKKmSU4SZN6IFqUw==
GC_STRAVA_CLIENT_SECRET:
secure: n3cMS1yy709xhSnTeWABMsoAIkJzy5euh3Pw4ehv0BzszJKoWkypF0cyW8RXSm3M
GC_TODAYSPLAN_CLIENT_SECRET:
secure: 7PnFB8cfahFT6LyP64eB7N1vkbwVaULpB2ORmEkn+J75zNB1xxGClFNXSHZ7kXhB
GC_CYCLINGANALYTICS_CLIENT_SECRET:
secure: UY+m3YypNNLUzKyGdrLw8xdCvxuQWRZi9EHS3j1ubLC4qyRL7iEVW6ubumfdh6gT
GC_CLOUD_DB_BASIC_AUTH:
secure: OEBetrOnXjsY7wN8hYqmMj6482oDORmAmCq8PI7mfnfiWE6Z4jB676JvgdNlP98q
GC_CLOUD_DB_APP_NAME:
secure: bpkyuw/BsJw0OrpuBqQwZ46CHbhkbmcjcMttVtfJoZU=
GC_POLARFLOW_CLIENT_SECRET:
secure: h2JdlC1i4QOmwpkz+Xxbrw==
GC_SPORTTRACKS_CLIENT_SECRET:
secure: n6a8nJgqMyg+VsVeoIIR8TFzxyDFVi2w/ggetQk5agY=
GC_RWGPS_API_KEY:
secure: uUtCyF5ByZ1VYJOztUngIA==
GC_NOLIO_CLIENT_ID:
secure: /OFVjEBwU7o3SItIQVf/YlJ8XErxneXIT2N0JyPMSXR1tCbdZVWixMHpqKNWoNk4
GC_NOLIO_SECRET:
secure: mmMksvVnfBiXufBDn2gAhQY53n0J9BokSCtDY51uU918QJ/LL4XOojtJp5tMFn8T7ugyDhNASpqZXiK55vxSD53vm+tjufpfzppKEeh93Babvc/VrndLB1X/RZCRUQTR6rka05fYl4e0eBzP1H091A==
GC_XERT_CLIENT_SECRET:
secure: /1rVLT8LyJCZ4xNJ5W+NtAcZ1rtKaUjW9SYm/T3gHoc=
init:
# Setup QT 5.15 - 64Bit
- set QTDIR=C:\Qt\5.15\msvc2019_64
# Setup QT 5.9 - 64Bit
- set QTDIR=C:\Qt\5.9\msvc2015_64
- set PATH=%QTDIR%\bin;%PATH%
- qmake --version
# Setup MSVC - VS 2019
- call c:\"Program Files (x86)"\"Microsoft Visual Studio"\2019\Community\VC\Auxiliary\Build\vcvarsall.bat amd64
# Setup MSVC - VS 2015
# Setup NSIS
- set PATH=%PATH%;C:\"Program Files (x86)"\NSIS
- CALL "C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\vcvarsall.bat" amd64
cache:
- gc-ci-libs.zip -> appveyor.yml
- jom_1_1_3.zip
- sip-4.19.8.zip
- C:\R
- C:\Python -> src\Python\requirements.txt
- c:\tools\vcpkg\installed\
- qwt
# Get the libraries
- ps: Start-FileDownload 'https://github.com/Joern-R/gc-ci-libs/releases/download/0.0.2/gc-ci-libs.zip' -Filename 'c:/gc-ci-libs.zip'
- 7z x c:/gc-ci-libs.zip -oC:\libs
install:
# Get the libraries
- if not exist gc-ci-libs.zip appveyor DownloadFile "https://github.com/GoldenCheetah/WindowsSDK/releases/download/v0.1.1/gc-ci-libs.zip"
- 7z x -y gc-ci-libs.zip -oC:\libs
# GSL
- vcpkg install gsl:x64-windows
# Get config
# choco install winflexbison
- copy qwt\qwtconfig.pri.in qwt\qwtconfig.pri
- copy c:\libs\gcconfig64-Release.appveyor.pri src\gcconfig.pri
# Get jom
- if not exist jom_1_1_3.zip appveyor DownloadFile "https://download.qt.io/official_releases/jom/jom_1_1_3.zip"
- 7z x -y jom_1_1_3.zip -oc:\jom\
- set PATH=%PATH%;c:\jom\;
# Get R and add to config
- ps: >-
if (-not (Test-Path 'C:\R')) {
# Lets use 4.1 until 4.2 issues are fixed
#$rurl = $(ConvertFrom-JSON $(Invoke-WebRequest https://rversions.r-pkg.org/r-release-win).Content).URL
$rurl = "https://cran.r-project.org/bin/windows/base/old/4.1.3/R-4.1.3-win.exe"
Start-FileDownload $rurl "R-win.exe"
Start-Process -FilePath .\R-win.exe -ArgumentList "/VERYSILENT /DIR=C:\R" -NoNewWindow -Wait
}
- set PATH=%PATH%;c:\R\bin\;
- R --version
- echo DEFINES+=GC_WANT_R >> src\gcconfig.pri
# Get Python embeddable and install packages
- ps: >-
if (-not (Test-Path 'C:\Python')) {
Start-FileDownload "https://www.python.org/ftp/python/3.7.9/python-3.7.9-embed-amd64.zip" Python.zip
7z x Python.zip -oC:\Python\
echo python37.zip . '' 'import site' | Out-File C:\Python\python37._pth -Encoding ascii
mkdir C:\Python\lib\site-packages
c:\python37-x64\python -m pip install --upgrade pip
c:\python37-x64\python -m pip install -r src\Python\requirements.txt -t C:\Python\lib\site-packages
}
# Get SIP and install on Python
- c:\python37-x64\python --version
- if not exist sip-4.19.8.zip appveyor DownloadFile "https://sourceforge.net/projects/pyqt/files/sip/sip-4.19.8/sip-4.19.8.zip"
- 7z x sip-4.19.8.zip
- cd sip-4.19.8
- c:\python37-x64\python configure.py
- jom -j4
- nmake install
- cd ..
# Add Python (avoiding collision between GC Context.h and Python context.h)
- echo DEFINES+=GC_WANT_PYTHON >> src\gcconfig.pri
- echo PYTHONINCLUDES=-ICore -I\"c:\python37-x64\include\" >> src\gcconfig.pri
- echo PYTHONLIBS=-L\"c:\python37-x64\libs\" -lpython37 >> src\gcconfig.pri
# GSL
- echo GSL_INCLUDES=c:\tools\vcpkg\installed\x64-windows\include >> src\gcconfig.pri
- echo GSL_LIBS=-Lc:\tools\vcpkg\installed\x64-windows\lib -lgsl -lgslcblas >> src\gcconfig.pri
before_build:
# Define GC version string, only for tagged builds
- if %APPVEYOR_REPO_TAG%==true echo DEFINES+=GC_VERSION=VERSION_STRING >> src\gcconfig.pri
# Enable CloudDB
- echo CloudDB=active >> src\gcconfig.pri
# Add Train Robot
- echo DEFINES+=GC_WANT_ROBOT >> src\gcconfig.pri
# Avoid macro redefinition warnings
- echo DEFINES+=_MATH_DEFINES_DEFINED >> src\gcconfig.pri
# Add debug console
#- echo CONFIG+=console >> src\gcconfig.pri
# Patch Secrets.h
- ps: (Get-Content src\Core\Secrets.h) -replace '__GC_GOOGLE_CALENDAR_CLIENT_SECRET__', $env:GC_GOOGLE_CALENDAR_CLIENT_SECRET | Set-Content src\Core\Secrets.h
- ps: (Get-Content src\Core\Secrets.h) -replace '__GC_GOOGLE_DRIVE_CLIENT_ID__', $env:GC_GOOGLE_DRIVE_CLIENT_ID | Set-Content src\Core\Secrets.h
- ps: (Get-Content src\Core\Secrets.h) -replace '__GC_GOOGLE_DRIVE_CLIENT_SECRET__', $env:GC_GOOGLE_DRIVE_CLIENT_SECRET | Set-Content src\Core\Secrets.h
- ps: (Get-Content src\Core\Secrets.h) -replace '__GC_GOOGLE_DRIVE_API_KEY__', $env:GC_GOOGLE_DRIVE_API_KEY | Set-Content src\Core\Secrets.h
- ps: (Get-Content src\Core\Secrets.h) -replace 'OPENDATA_DISABLE', 'OPENDATA_ENABLE' | Set-Content src\Core\Secrets.h
- ps: (Get-Content src\Core\Secrets.h) -replace '__GC_CLOUD_OPENDATA_SECRET__', $env:GC_CLOUD_OPENDATA_SECRET | Set-Content src\Core\Secrets.h
- ps: (Get-Content src\Core\Secrets.h) -replace '__GC_WITHINGS_CONSUMER_SECRET__', $env:GC_WITHINGS_CONSUMER_SECRET | Set-Content src\Core\Secrets.h
- ps: (Get-Content src\Core\Secrets.h) -replace '__GC_NOKIA_CLIENT_SECRET__', $env:GC_NOKIA_CLIENT_SECRET | Set-Content src\Core\Secrets.h
- ps: (Get-Content src\Core\Secrets.h) -replace '__GC_DROPBOX_CLIENT_SECRET__', $env:GC_DROPBOX_CLIENT_SECRET | Set-Content src\Core\Secrets.h
- ps: (Get-Content src\Core\Secrets.h) -replace '__GC_STRAVA_CLIENT_SECRET__', $env:GC_STRAVA_CLIENT_SECRET | Set-Content src\Core\Secrets.h
- ps: (Get-Content src\Core\Secrets.h) -replace '__GC_TODAYSPLAN_CLIENT_SECRET__', $env:GC_TODAYSPLAN_CLIENT_SECRET | Set-Content src\Core\Secrets.h
- ps: (Get-Content src\Core\Secrets.h) -replace '__GC_CYCLINGANALYTICS_CLIENT_SECRET__', $env:GC_CYCLINGANALYTICS_CLIENT_SECRET | Set-Content src\Core\Secrets.h
- ps: (Get-Content src\Core\Secrets.h) -replace '__GC_CLOUD_DB_BASIC_AUTH__', $env:GC_CLOUD_DB_BASIC_AUTH | Set-Content src\Core\Secrets.h
- ps: (Get-Content src\Core\Secrets.h) -replace '__GC_CLOUD_DB_APP_NAME__', $env:GC_CLOUD_DB_APP_NAME | Set-Content src\Core\Secrets.h
- ps: (Get-Content src\Core\Secrets.h) -replace '__GC_POLARFLOW_CLIENT_SECRET__', $env:GC_POLARFLOW_CLIENT_SECRET | Set-Content src\Core\Secrets.h
- ps: (Get-Content src\Core\Secrets.h) -replace '__GC_SPORTTRACKS_CLIENT_SECRET__', $env:GC_SPORTTRACKS_CLIENT_SECRET | Set-Content src\Core\Secrets.h
- ps: (Get-Content src\Core\Secrets.h) -replace '__GC_RWGPS_API_KEY__', $env:GC_RWGPS_API_KEY | Set-Content src\Core\Secrets.h
- ps: (Get-Content src\Core\Secrets.h) -replace '__GC_NOLIO_CLIENT_ID__', $env:GC_NOLIO_CLIENT_ID | Set-Content src\Core\Secrets.h
- ps: (Get-Content src\Core\Secrets.h) -replace '__GC_NOLIO_SECRET__', $env:GC_NOLIO_SECRET | Set-Content src\Core\Secrets.h
- ps: (Get-Content src\Core\Secrets.h) -replace '__GC_XERT_CLIENT_SECRET__', $env:GC_XERT_CLIENT_SECRET | Set-Content src\Core\Secrets.h
build_script:
- qmake.exe build.pro -r -spec win32-msvc
- cd qwt\
- jom -j1
- cd ..
- jom -j4
- nmake
after_build:
- cd src\release
# copy dependencies
- windeployqt --release GoldenCheetah.exe
- copy c:\libs\10_Precompiled_DLL\usbexpress_3.5.1\USBXpress\USBXpress_API\Host\x64\SiUSBXp.dll
- copy c:\libs\10_Precompiled_DLL\libsamplerate64\lib\libsamplerate-0.dll
- copy c:\libs\10_Precompiled_DLL\VLC\win64\lib\libvlc*.dll
- xcopy /s /i /e /q c:\libs\10_Precompiled_DLL\VLC\win64\plugins plugins
- copy c:\OpenSSL-v111-Win64\bin\lib*.dll
- copy c:\OpenSSL-v111-Win64\license.txt "OpenSSL License.txt"
- xcopy /s /i /e /q C:\Python .
- copy C:\Python\LICENSE.txt "PYTHON LICENSE.txt"
- copy c:\tools\vcpkg\installed\x64-windows\bin\gsl*.dll
# ReadMe, license and icon files
- copy ..\Resources\win32\ReadMe.txt
- echo GoldenCheetah is licensed under the GNU General Public License v2 > license.txt
- echo. >> license.txt
- type ..\..\COPYING >> license.txt
- copy ..\Resources\win32\gc.ico
# Installer script
- copy ..\Resources\win32\GC3.6-Dev-Master-W64-QT5.nsi
# Build the installer
- makensis GC3.6-Dev-Master-W64-QT5.nsi
- move GoldenCheetah_v3.6-DEV_64bit_Windows.exe ..\..\GoldenCheetah_v3.6-DEV_x64.exe
- cd ..\..
- ps: Set-AppveyorBuildVariable -Name 'PUBLISH_BINARIES' -Value false
- ps: if ($env:APPVEYOR_REPO_COMMIT_MESSAGE_EXTENDED -Match "\[publish binaries\]") { Set-AppveyorBuildVariable -Name 'PUBLISH_BINARIES' -Value true }
test_script:
# minimum test
- src\release\GoldenCheetah --version 2>GCversionWindows.txt
- git log -1 >> GCversionWindows.txt
- ps: CertUtil -hashfile GoldenCheetah_v3.6-DEV_x64.exe sha256 | Select-Object -First 2 | Add-Content GCversionWindows.txt
- type GCversionWindows.txt
artifacts:
- path: GoldenCheetah_v3.6-DEV_x64.exe
name: GCinstaller
- path: GCversionWindows.txt
name: GCversionWindows
deploy:
# deploy continuous builds to s3
- provider: S3
access_key_id:
secure: RoEkfKfOnF7JHOiLZX5qfZM08X+bu4oaDzzSKgdooNM=
secret_access_key:
secure: GPAArawatirYwgpHJBthdrbvyFU5bBzPOdK7VYYPiG2YHYi/DNJZ5Q5qGK1A440p
bucket: goldencheetah-binaries
region: us-east-1
remove_files: true
set_public: true
folder: Windows
artifact: GCinstaller, GCversionWindows
on:
PUBLISH_BINARIES: true
APPVEYOR_REPO_NAME: "GoldenCheetah/GoldenCheetah"
#notifications:
#- provider: GitHubPullRequest
# on_build_success: true
# on_build_failure: true
# on_build_status_changed: true

File diff suppressed because it is too large


@@ -1,19 +0,0 @@
NOTE
The sources in this sub-directory including the LICENSE and README files
are imported with permission granted by the author (Greg Hamerly).
The original sources are available from Github here:
https://github.com/ghamerly/fast-kmeans
Whilst the original source implements multiple algorithms, we only kept
the Hamerly variant.
The source files have not been adapted to use Qt containers or to refactor
any of the class inheritance, so updating to newer versions should be
very straightforward (the base functionality and performance are already
good enough).
Qt applications will typically use the Kmeans wrapper class.
28/09/2021


@@ -1,22 +0,0 @@
The MIT License (MIT)
Copyright (c) 2014 Greg Hamerly
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@@ -1,15 +0,0 @@
# Doesn't work now since the sources have been placed in a subdir.
# Left for info, and fairly trivial to fix up if you want to work
# directly with the original sources.
OBJECTS=kmeans_dataset.o \
general_functions.o \
hamerly_kmeans.o \
kmeans.o \
original_space_kmeans.o \
triangle_inequality_base_kmeans.o \
driver-standalone.o
driver-standalone: $(OBJECTS)
gcc -o $@ $(OBJECTS) -lstdc++ -lm
./driver-standalone hamerly smallDataset.txt 4 centers


@@ -1,106 +0,0 @@
===============================
Fast K-means Clustering Toolkit
===============================
----------------------
Version 0.1 (Sat May 17 17:41:11 CDT 2014)
- Initial release.
----------------------
WHAT:
This software is a testbed for comparing variants of Lloyd's k-means clustering
algorithm. It includes implementations of several algorithms that accelerate
the algorithm by avoiding unnecessary distance calculations.
----------------------
WHO:
Greg Hamerly (hamerly@cs.baylor.edu, primary contact) and Jonathan Drake
(drakej@hp.com).
----------------------
HOW TO BUILD THE SOFTWARE:
type "make" (and hope for the best)
----------------------
HOW TO RUN THE SOFTWARE:
The driver is designed to take commands from standard input, usually a file
that's been redirected as input:
./kmeans < commands.txt
You can read the source to find all the possible commands, but here is a
summary:
- threads T -- use T threads for clustering
- maxiterations I -- use at most I iterations; default (or negative)
indicates an unlimited number
- dataset D -- use the given path name to a file as the dataset for
clustering. The dataset should have a first line with the number of points
n and dimension d. The next (n*d) tokens are taken as the n vectors
to cluster.
- initialize k {kpp|random} -- use the given method (k-means++ or a random
sample of the points) to initialize k centers
- lloyd, hamerly, annulus, elkan, compare, sort, heap, adaptive -- perform
k-means clustering with the given algorithm (requires first having
initialized the centers). The adaptive algorithm is Drake's algorithm with
a heuristic for choosing an initial B
- drake B -- use Drake's algorithm with B lower bounds
- kernel [gaussian T | linear | polynomial P] -- use kernelized k-means with
the given kernel
- elkan_kernel [gaussian T | linear | polynomial P] -- use kernelized
k-means with the given kernel, and Elkan's accelerations
- center -- give the previously-loaded dataset a mean of 0.
- quit -- quit the program
Note that when a set of centers is initialized, that same set of centers is used
from then on (until a new initialization occurs). So running a clustering
algorithm multiple times will use the same initialization each time.
Here is an example of a simple set of commands:
dataset smallDataset.txt
initialize 10 kpp
annulus
hamerly
adaptive
heap
elkan
sort
compare
----------------------
CAVEATS:
- This software has been developed and tested on Linux. Other platforms may not
  work. Please let us know if you run into difficulties and, if possible, send
  fixes for the code.
- This software uses a non-standard pthreads function called
pthread_barrier_wait(), which is implemented on Linux but not on OSX.
Therefore, multithreading doesn't currently work on OSX. To turn it off,
comment out the lines in the Makefile that say:
CPPFLAGS += -DUSE_THREADS
LDFLAGS += -lpthread
----------------------
REFERENCES:
Phillips, Steven J. "Acceleration of k-means and related clustering algorithms."
In Algorithm Engineering and Experiments, pp. 166-177. Springer Berlin
Heidelberg, 2002.
Elkan, Charles. "Using the triangle inequality to accelerate k-means." In ICML,
vol. 3, pp. 147-153. 2003.
Hamerly, Greg. "Making k-means Even Faster." In SDM, pp. 130-140. 2010.
Drake, Jonathan, and Greg Hamerly. "Accelerated k-means with adaptive distance
bounds." In 5th NIPS Workshop on Optimization for Machine Learning. 2012.
Drake, Jonathan. "Faster k-means clustering." MS thesis, 2013.
Hamerly, Greg, and Jonathan Drake. "Accelerating Lloyd's algorithm for k-means
clustering." To appear in Partitional Clustering Algorithms, Springer, 2014.


@@ -1,70 +0,0 @@
#include <iostream>
#include <fstream>
#include <string>
#include <cassert>
#include "kmeans/general_functions.h"
#include "kmeans/kmeans.h"
#include "dataset.h"
#include "hamerly_kmeans.h"
Dataset *load_dataset(std::string const &filename) {
std::ifstream input(filename.c_str());
int n, d;
input >> n >> d;
Dataset *x = new Dataset(n, d);
for (int i = 0; i < n * d; ++i) input >> x->data[i];
return x;
}
Kmeans *get_algorithm(std::string const &name) {
if (name == "hamerly") return new HamerlyKmeans();
return NULL;
}
int main(int argc, char **argv) {
if (argc != 5) {
std::cout << "usage: " << argv[0] << " algorithm dataset k [centers|assignment]\n";
return 1;
}
std::string algorithm_name(argv[1]);
std::string filename(argv[2]);
int k = std::stoi(argv[3]);
std::string output(argv[4]);
Dataset *x = load_dataset(filename);
Kmeans *algorithm = get_algorithm(algorithm_name);
Dataset *initialCenters = init_centers_kmeanspp_v2(*x, k);
unsigned short *assignment = new unsigned short[x->n];
assign(*x, *initialCenters, assignment);
algorithm->initialize(x, k, assignment, 1);
algorithm->run(10000);
Dataset const *finalCenters = algorithm->getCenters();
if (output == "centers") {
finalCenters->print();
} else {
assign(*x, *finalCenters, assignment);
for (int i = 0; i < x->n; ++i) {
std::cout << assignment[i] << "\n";
}
}
delete x;
delete algorithm;
delete initialCenters;
delete [] assignment;
return 0;
}


@@ -1,179 +0,0 @@
/* Authors: Greg Hamerly and Jonathan Drake
* Feedback: hamerly@cs.baylor.edu
* See: http://cs.baylor.edu/~hamerly/software/kmeans.php
* Copyright 2014
*/
#include "hamerly_kmeans.h"
#include "kmeans_general_functions.h"
#include <cmath>
#include <algorithm>
/* Hamerly's algorithm that is a 'simplification' of Elkan's, in that it keeps
* the following bounds:
* - One upper bound per clustered record on the distance between the record
* and its closest center. It is always greater than or equal to the true
* distance between the record and its closest center. This is the same as in
* Elkan's algorithm.
* - *One* lower bound per clustered record on the distance between the record
* and its *second*-closest center. It is always less than or equal to the
* true distance between the record and its second closest center. This is
* different information than Elkan's algorithm -- his algorithm keeps k
* lower bounds for each record, for a total of (n*k) lower bounds.
*
* The basic ideas are:
* - when lower(x) <= upper(x), we need to recalculate the closest centers for
* the record x, and reset lower(x) and upper(x) to their boundary values
* - whenever a center moves
* - calculate the distance it moves 'd'
* - for each record x assigned to that center, update its upper bound
* - upper(x) = upper(x) + d
* - after each iteration
* - find the center that has moved the most (with distance 'd')
* - update the lower bound for all (?) records:
* - lower(x) = lower(x) - d
*
* Parameters:
* - threadId: the index of the thread that is running
* - maxIterations: a bound on the number of iterations to perform
*
* Return value: the number of iterations performed (always at least 1)
*/
// this version only updates center locations when necessary
int HamerlyKmeans::runThread(int threadId, int maxIterations) {
int iterations = 0;
int startNdx = start(threadId);
int endNdx = end(threadId);
while ((iterations < maxIterations) && ! converged) {
++iterations;
// compute the inter-center distances, keeping only the closest distances
update_s(threadId);
synchronizeAllThreads();
// loop over all records
for (int i = startNdx; i < endNdx; ++i) {
unsigned short closest = assignment[i];
// if upper[i] is less than the greater of these two, then we can
// ignore record i
double upper_comparison_bound = std::max(s[closest], lower[i]);
// first check: if u(x) <= s(c(x)) or u(x) <= lower(x), then ignore
// x, because its closest center must still be closest
if (upper[i] <= upper_comparison_bound) {
continue;
}
// otherwise, compute the real distance between this record and its
// closest center, and update upper
double u2 = pointCenterDist2(i, closest);
upper[i] = sqrt(u2);
// if (u(x) <= s(c(x))) or (u(x) <= lower(x)), then ignore x
if (upper[i] <= upper_comparison_bound) {
continue;
}
// now update the lower bound by looking at all other centers
double l2 = std::numeric_limits<double>::max(); // the squared lower bound
for (int j = 0; j < k; ++j) {
if (j == closest) { continue; }
double dist2 = pointCenterDist2(i, j);
if (dist2 < u2) {
// another center is closer than the current assignment
// change the lower bound to be the current upper bound
// (since the current upper bound is the distance to the
// now-second-closest known center)
l2 = u2;
// adjust the upper bound and the current assignment
u2 = dist2;
closest = j;
} else if (dist2 < l2) {
// we must reduce the lower bound on the distance to the
// *second* closest center to x[i]
l2 = dist2;
}
}
// we have been dealing in squared distances; need to convert
lower[i] = sqrt(l2);
// if the assignment for i has changed, then adjust the counts and
// locations of each center's accumulated mass
if (assignment[i] != closest) {
upper[i] = sqrt(u2);
changeAssignment(i, closest, threadId);
}
}
verifyAssignment(iterations, startNdx, endNdx);
// ELKAN 4, 5, AND 6
// calculate the new center locations
synchronizeAllThreads();
if (threadId == 0) {
int furthestMovingCenter = move_centers();
converged = (0.0 == centerMovement[furthestMovingCenter]);
}
synchronizeAllThreads();
if (! converged) {
update_bounds(startNdx, endNdx);
}
synchronizeAllThreads();
}
return iterations;
}
/* This method does the following:
* - finds the furthest-moving center
* - finds the distances moved by the two furthest-moving centers
* - updates the upper/lower bounds for each record
*
* Parameters:
* - startNdx: the first index of the dataset this thread is responsible for
* - endNdx: one past the last index of the dataset this thread is responsible for
*/
void HamerlyKmeans::update_bounds(int startNdx, int endNdx) {
double longest = centerMovement[0], secondLongest = (1 < k) ? centerMovement[1] : centerMovement[0];
int furthestMovingCenter = 0;
if (longest < secondLongest) {
furthestMovingCenter = 1;
std::swap(longest, secondLongest);
}
for (int j = 2; j < k; ++j) {
if (longest < centerMovement[j]) {
secondLongest = longest;
longest = centerMovement[j];
furthestMovingCenter = j;
} else if (secondLongest < centerMovement[j]) {
secondLongest = centerMovement[j];
}
}
// update upper/lower bounds
for (int i = startNdx; i < endNdx; ++i) {
// the upper bound increases by the amount that its center moved
upper[i] += centerMovement[assignment[i]];
// The lower bound decreases by the maximum amount that any center
// moved, unless the furthest-moving center is the one it's assigned
// to. In the latter case, the lower bound decreases by the amount
// of the second-furthest-moving center.
lower[i] -= (assignment[i] == furthestMovingCenter) ? secondLongest : longest;
}
}


@@ -1,29 +0,0 @@
#ifndef HAMERLY_KMEANS_H
#define HAMERLY_KMEANS_H
/* Authors: Greg Hamerly and Jonathan Drake
* Feedback: hamerly@cs.baylor.edu
* See: http://cs.baylor.edu/~hamerly/software/kmeans.php
* Copyright 2014
*
* HamerlyKmeans implements Hamerly's k-means algorithm that uses one lower
* bound per point.
*/
#include "triangle_inequality_base_kmeans.h"
class HamerlyKmeans : public TriangleInequalityBaseKmeans {
public:
HamerlyKmeans() { numLowerBounds = 1; }
virtual ~HamerlyKmeans() { free(); }
virtual std::string getName() const { return "hamerly"; }
protected:
// Update the upper and lower bounds for the given range of points.
void update_bounds(int startNdx, int endNdx);
virtual int runThread(int threadId, int maxIterations);
};
#endif


@@ -1,165 +0,0 @@
/* Authors: Greg Hamerly and Jonathan Drake
* Feedback: hamerly@cs.baylor.edu
* See: http://cs.baylor.edu/~hamerly/software/kmeans.php
* Copyright 2014
*/
#include "kmeans.h"
#include "kmeans_general_functions.h"
#include <cassert>
#include <cmath>
Kmeans::Kmeans() : x(NULL), n(0), k(0), d(0), numThreads(0), converged(false),
clusterSize(NULL), centerMovement(NULL), assignment(NULL) {
#ifdef COUNT_DISTANCES
numDistances = 0;
#endif
}
void Kmeans::free() {
delete [] centerMovement;
for (int t = 0; t < numThreads; ++t) {
delete [] clusterSize[t];
}
delete [] clusterSize;
centerMovement = NULL;
clusterSize = NULL;
assignment = NULL;
n = k = d = numThreads = 0;
}
void Kmeans::initialize(Dataset const *aX, unsigned short aK, unsigned short *initialAssignment, int aNumThreads) {
free();
converged = false;
x = aX;
n = x->n;
d = x->d;
k = aK;
#ifdef USE_THREADS
numThreads = aNumThreads;
pthread_barrier_init(&barrier, NULL, numThreads);
#else
numThreads = 1;
#endif
assignment = initialAssignment;
centerMovement = new double[k];
clusterSize = new int *[numThreads];
for (int t = 0; t < numThreads; ++t) {
clusterSize[t] = new int[k];
std::fill(clusterSize[t], clusterSize[t] + k, 0);
for (int i = start(t); i < end(t); ++i) {
assert(assignment[i] < k);
++clusterSize[t][assignment[i]];
}
}
#ifdef COUNT_DISTANCES
numDistances = 0;
#endif
}
void Kmeans::changeAssignment(int xIndex, int closestCluster, int threadId) {
--clusterSize[threadId][assignment[xIndex]];
++clusterSize[threadId][closestCluster];
assignment[xIndex] = closestCluster;
}
#ifdef USE_THREADS
struct ThreadInfo {
public:
ThreadInfo() : km(NULL), threadId(0), pthread_id(0) {}
Kmeans *km;
int threadId;
pthread_t pthread_id;
int numIterations;
int maxIterations;
};
#endif
void *Kmeans::runner(void *args) {
#ifdef USE_THREADS
ThreadInfo *ti = (ThreadInfo *)args;
ti->numIterations = ti->km->runThread(ti->threadId, ti->maxIterations);
#endif
return NULL;
}
int Kmeans::run(int maxIterations) {
int iterations = 0;
#ifdef USE_THREADS
{
ThreadInfo *info = new ThreadInfo[numThreads];
for (int t = 0; t < numThreads; ++t) {
info[t].km = this;
info[t].threadId = t;
info[t].maxIterations = maxIterations;
pthread_create(&info[t].pthread_id, NULL, Kmeans::runner, &info[t]);
}
// wait for everything to finish...
for (int t = 0; t < numThreads; ++t) {
pthread_join(info[t].pthread_id, NULL);
}
iterations = info[0].numIterations;
delete [] info;
}
#else
{
iterations = runThread(0, maxIterations);
}
#endif
return iterations;
}
double Kmeans::getSSE() const {
double sse = 0.0;
for (int i = 0; i < n; ++i) {
sse += pointCenterDist2(i, assignment[i]);
}
return sse;
}
void Kmeans::verifyAssignment(int iteration, int startNdx, int endNdx) const {
#ifdef VERIFY_ASSIGNMENTS
for (int i = startNdx; i < endNdx; ++i) {
// keep track of the squared distance and identity of the closest-seen
// cluster (so far)
int closest = assignment[i];
double closest_dist2 = pointCenterDist2(i, closest);
double original_closest_dist2 = closest_dist2;
// look at all centers
for (int j = 0; j < k; ++j) {
if (j == closest) {
continue;
}
double d2 = pointCenterDist2(i, j);
// determine if we found a closer center
if (d2 < closest_dist2) {
closest = j;
closest_dist2 = d2;
}
}
// if we have found a discrepancy, then print out information and crash
// the program
if (closest != assignment[i]) {
std::cerr << "assignment error:" << std::endl;
std::cerr << "iteration = " << iteration << std::endl;
std::cerr << "point index = " << i << std::endl;
std::cerr << "closest center = " << closest << std::endl;
std::cerr << "closest center dist2 = " << closest_dist2 << std::endl;
std::cerr << "assigned center = " << assignment[i] << std::endl;
std::cerr << "assigned center dist2 = " << original_closest_dist2 << std::endl;
assert(false);
}
}
#endif
}


@@ -1,147 +0,0 @@
#ifndef KMEANS_H
#define KMEANS_H
/* Authors: Greg Hamerly and Jonathan Drake
* Feedback: hamerly@cs.baylor.edu
* See: http://cs.baylor.edu/~hamerly/software/kmeans.php
* Copyright 2014
*
* Kmeans is an abstract base class for algorithms which implement Lloyd's
* k-means algorithm. Subclasses provide functionality in the "runThread()"
* method.
*/
#include "kmeans_dataset.h"
#include <limits>
#include <string>
#ifdef USE_THREADS
#include <pthread.h>
#endif
class Kmeans {
public:
// Construct a K-means object to operate on the given dataset
Kmeans();
virtual ~Kmeans() { free(); }
// This method kicks off the threads that do the clustering and run
// until convergence (or until reaching maxIterations). It returns the
// number of iterations performed.
int run(int aMaxIterations = std::numeric_limits<int>::max());
// Get the cluster assignment for the given point index.
int getAssignment(int xIndex) const { return assignment[xIndex]; }
// Initialize the algorithm at the beginning of the run(), with the
// given data and initial assignment. The parameter initialAssignment
// will be modified by this algorithm and will at the end contain the
// final assignment of clusters.
virtual void initialize(Dataset const *aX, unsigned short aK, unsigned short *initialAssignment, int aNumThreads);
// Free all memory being used by the object.
virtual void free();
// This method verifies that the current assignment is correct, by
// checking every point-center distance. For debugging.
virtual void verifyAssignment(int iteration, int startNdx, int endNdx) const;
// Compute the sum of squared errors for the data on the centers (not
// designed to be fast).
virtual double getSSE() const;
// Get the name of this clustering algorithm (to be overridden by
// subclasses).
virtual std::string getName() const = 0;
// Virtual methods for computing inner products (depending on the kernel
// being used, e.g.). For vanilla k-means these will be the standard dot
// product; for more exotic applications these will be other kernel
// functions.
virtual double pointPointInnerProduct(int x1, int x2) const = 0;
virtual double pointCenterInnerProduct(int xndx, unsigned short cndx) const = 0;
virtual double centerCenterInnerProduct(unsigned short c1, unsigned short c2) const = 0;
// Use the inner products to compute squared distances between a point
// and center.
virtual double pointCenterDist2(int x1, unsigned short cndx) const {
#ifdef COUNT_DISTANCES
++numDistances;
#endif
return pointPointInnerProduct(x1, x1) - 2 * pointCenterInnerProduct(x1, cndx) + centerCenterInnerProduct(cndx, cndx);
}
// Use the inner products to compute squared distances between two
// centers.
virtual double centerCenterDist2(unsigned short c1, unsigned short c2) const {
#ifdef COUNT_DISTANCES
++numDistances;
#endif
return centerCenterInnerProduct(c1, c1) - 2 * centerCenterInnerProduct(c1, c2) + centerCenterInnerProduct(c2, c2);
}
#ifdef COUNT_DISTANCES
#ifdef USE_THREADS
// Note: numDistances is NOT thread-safe, but it is not meant to be
// enabled in performant code.
#error Counting distances and using multiple threads is not supported.
#endif
mutable long long numDistances;
#endif
virtual Dataset const *getCenters() const { return NULL; }
protected:
// The dataset to cluster.
const Dataset *x;
// Local copies for convenience.
int n, k, d;
// Pthread primitives for multithreading.
int numThreads;
#ifdef USE_THREADS
pthread_barrier_t barrier;
#endif
// To communicate (to all threads) that we have converged.
bool converged;
// Keep track of how many points are in each cluster, divided over each
// thread.
int **clusterSize;
// centerMovement is computed in move_centers() and used to detect
// convergence (if max(centerMovement) == 0.0) and update point-center
// distance bounds (in subclasses that use them).
double *centerMovement;
// For each point in x, keep which cluster it is assigned to. By using a
// short, we assume a limited number of clusters (fewer than 2^16).
unsigned short *assignment;
// This is where each thread does its work.
virtual int runThread(int threadId, int maxIterations) = 0;
// Static entry method for pthread_create().
static void *runner(void *args);
// Assign point at xIndex to cluster newCluster, working within thread threadId.
virtual void changeAssignment(int xIndex, int newCluster, int threadId);
// Over what range in [0, n) does this thread have ownership of the
// points? end() returns one past the last owned point.
int start(int threadId) const { return n * threadId / numThreads; }
int end(int threadId) const { return start(threadId + 1); }
int whichThread(int index) const { return index * numThreads / n; }
// Convenience method for causing all threads to synchronize.
void synchronizeAllThreads() {
#ifdef USE_THREADS
pthread_barrier_wait(&barrier);
#endif
}
};
#endif


@@ -1,95 +0,0 @@
/* Authors: Greg Hamerly and Jonathan Drake
* Feedback: hamerly@cs.baylor.edu
* See: http://cs.baylor.edu/~hamerly/software/kmeans.php
* Copyright 2014
*/
#include "kmeans_dataset.h"
// #include <iostream>
#include <iomanip>
#include <cassert>
#include <cstring>
// print the dataset to standard output (cout), using formatting to keep the
// data in matrix format
void Dataset::print(std::ostream &out) const {
//std::ostream &out = std::cout;
out.precision(6);
int ndx = 0;
for (int i = 0; i < n; ++i) {
for (int j = 0; j < d; ++j) {
out << std::setw(13) << data[ndx++] << " ";
}
out << std::endl;
}
}
// returns a (modifiable) reference to the value in dimension "dim" from record
// "ndx"
double &Dataset::operator()(int ndx, int dim) {
# ifdef DEBUG
assert(ndx < n);
assert(dim < d);
# endif
return data[ndx * d + dim];
}
// returns a (const) reference to the value in dimension "dim" from record "ndx"
const double &Dataset::operator()(int ndx, int dim) const {
# ifdef DEBUG
assert(ndx < n);
assert(dim < d);
# endif
return data[ndx * d + dim];
}
// fill the entire dataset with value. Does NOT update sumDataSquared.
void Dataset::fill(double value) {
for (int i = 0; i < nd; ++i) {
data[i] = value;
}
}
// copy constructor -- makes a deep copy of everything in x
Dataset::Dataset(Dataset const &x) {
n = d = nd = 0;
data = sumDataSquared = NULL;
*this = x;
}
// operator= is the standard deep-copy assignment operator, which
// returns a const reference to *this.
Dataset const &Dataset::operator=(Dataset const &x) {
if (this != &x) {
// reallocate sumDataSquared and data as necessary
if (n != x.n) {
delete [] sumDataSquared;
sumDataSquared = x.sumDataSquared ? new double[x.n] : NULL;
}
if (nd != x.nd) {
delete [] data;
data = x.data ? new double[x.nd] : NULL;
}
// reflect the new sizes
n = x.n;
d = x.d;
nd = x.nd;
// copy data as appropriate
if (x.sumDataSquared) {
memcpy(sumDataSquared, x.sumDataSquared, x.n * sizeof(double));
}
if (x.data) {
memcpy(data, x.data, x.nd * sizeof(double));
}
}
// return a reference for chaining assignments
return *this;
}


@@ -1,85 +0,0 @@
#ifndef DATASET_H
#define DATASET_H
/* Authors: Greg Hamerly and Jonathan Drake
* Feedback: hamerly@cs.baylor.edu
* See: http://cs.baylor.edu/~hamerly/software/kmeans.php
* Copyright 2014
*
* A Dataset class represents a collection of multidimensional records, as is
* typical in metric machine learning. Every record has the same number of
* dimensions (values), and every value must be numeric. Undefined values are
* not allowed.
*
* This particular implementation keeps all the data in a 1-dimensional array,
* and also optionally keeps extra storage for the sum of the squared values of
* each record. However, the Dataset class does NOT automatically populate or
* update the sumDataSquared values.
*/
#include <cstddef>
#include <iostream>
class Dataset {
public:
// default constructor -- constructs a completely empty dataset with no
// records
Dataset() : n(0), d(0), nd(0), data(NULL), sumDataSquared(NULL) {}
// construct a dataset of a particular size, and determine whether to
// keep the sumDataSquared
Dataset(int aN, int aD, bool keepSDS = false) : n(aN), d(aD), nd(n * d),
data(new double[nd]),
sumDataSquared(keepSDS ? new double[n] : NULL) {}
// copy constructor -- makes a deep copy of everything in x
Dataset(Dataset const &x);
// destroys the dataset safely
~Dataset() {
n = d = nd = 0;
double *dp = data, *sdsp = sumDataSquared;
data = sumDataSquared = NULL;
delete [] dp;
delete [] sdsp;
}
// operator= is the standard deep-copy assignment operator, which
// returns a const reference to *this.
Dataset const &operator=(Dataset const &x);
// allows modification of the record ndx and dimension dim
double &operator()(int ndx, int dim);
// allows const access to record ndx and dimension dim
const double &operator()(int ndx, int dim) const;
// fill the entire dataset with value. Does NOT update sumDataSquared.
void fill(double value);
// print the dataset to standard output (cout), using formatting to keep the
// data in matrix format
void print(std::ostream &out = std::cout) const;
// n represents the number of records
// d represents the dimension
// nd is a shortcut for the value n * d
int n, d, nd;
// data is an array of length n*d that stores all of the records in
// record-major (row-major) order. Thus data[0]...data[d-1] are the
// values associated with the first record.
double *data;
// sumDataSquared is an (optional) sum of squared values for every
// record. Thus,
// sumDataSquared[0] = data[0]^2 + data[1]^2 + ... + data[d-1]^2
// sumDataSquared[1] = data[d]^2 + data[d+1]^2 + ... + data[2*d-1]^2
// and so on. Note that this is the *intended* use of the sumDataSquared
// field, but that the Dataset class does NOT automatically populate or
// update the values in sumDataSquared.
double *sumDataSquared;
};
#endif


@@ -1,256 +0,0 @@
/* Authors: Greg Hamerly and Jonathan Drake
* Feedback: hamerly@cs.baylor.edu
* See: http://cs.baylor.edu/~hamerly/software/kmeans.php
* Copyright 2014
*/
#include "kmeans_dataset.h"
#include "kmeans.h"
#include "kmeans_general_functions.h"
#include <cassert>
#include <cmath>
#include <algorithm>
#include <numeric>
#include <cstring>
#include <cstdio>
void addVectors(double *a, double const *b, int d) {
double const *end = a + d;
while (a < end) {
*(a++) += *(b++);
}
}
void subVectors(double *a, double const *b, int d) {
double const *end = a + d;
while (a < end) {
*(a++) -= *(b++);
}
}
double distance2silent(double const *a, double const *b, int d) {
double d2 = 0.0, diff;
double const *end = a + d; // one past the last valid entry in a
while (a < end) {
diff = *(a++) - *(b++);
d2 += diff * diff;
}
return d2;
}
void centerDataset(Dataset *x) {
double *xCentroid = new double[x->d];
for (int d = 0; d < x->d; ++d) {
xCentroid[d] = 0.0;
}
for (int i = 0; i < x->n; ++i) {
addVectors(xCentroid, x->data + i * x->d, x->d);
}
// compute average (divide by n)
for (int d = 0; d < x->d; ++d) {
xCentroid[d] /= x->n;
}
// re-center the dataset
const double *xEnd = x->data + x->n * x->d;
for (double *xp = x->data; xp != xEnd; xp += x->d) {
subVectors(xp, xCentroid, x->d);
}
delete [] xCentroid;
}
Dataset *init_centers(Dataset const &x, unsigned short k) {
int *chosen_pts = new int[k];
Dataset *c = new Dataset(k, x.d);
for (int i = 0; i < k; ++i) {
bool acceptable = true;
do {
acceptable = true;
chosen_pts[i] = rand() % x.n;
for (int j = 0; j < i; ++j) {
if (chosen_pts[i] == chosen_pts[j]) {
acceptable = false;
break;
}
}
} while (! acceptable);
double *cdp = c->data + i * x.d;
memcpy(cdp, x.data + chosen_pts[i] * x.d, sizeof(double) * x.d);
if (c->sumDataSquared) {
c->sumDataSquared[i] = std::inner_product(cdp, cdp + x.d, cdp, 0.0);
}
}
delete [] chosen_pts;
return c;
}
Dataset *init_centers_kmeanspp(Dataset const &x, unsigned short k) {
int *chosen_pts = new int[k];
std::pair<double, int> *dist2 = new std::pair<double, int>[x.n];
double *distribution = new double[x.n];
// initialize dist2
for (int i = 0; i < x.n; ++i) {
dist2[i].first = std::numeric_limits<double>::max();
dist2[i].second = i;
}
// choose the first point randomly
int ndx = 1;
chosen_pts[ndx - 1] = rand() % x.n;
while (ndx < k) {
double sum_distribution = 0.0;
// update each point's squared distance to its closest chosen center
for (int i = 0; i < x.n; ++i) {
int example = dist2[i].second;
double d2 = 0.0, diff;
for (int j = 0; j < x.d; ++j) {
diff = x(example,j) - x(chosen_pts[ndx - 1],j);
d2 += diff * diff;
}
if (d2 < dist2[i].first) {
dist2[i].first = d2;
}
sum_distribution += dist2[i].first;
}
// sort the examples by their distance from centers
std::sort(dist2, dist2 + x.n);
// turn distribution into a CDF
distribution[0] = dist2[0].first / sum_distribution;
for (int i = 1; i < x.n; ++i) {
distribution[i] = distribution[i - 1] + dist2[i].first / sum_distribution;
}
// choose a random interval according to the new distribution
double r = (double)rand() / (double)RAND_MAX;
double *new_center_ptr = std::lower_bound(distribution, distribution + x.n, r);
int distribution_ndx = (int)(new_center_ptr - distribution);
chosen_pts[ndx] = dist2[distribution_ndx].second;
/*
cout << "chose " << distribution_ndx << " which is actually "
<< chosen_pts[ndx] << " with distance "
<< dist2[distribution_ndx].first << std::endl;
*/
++ndx;
}
Dataset *c = new Dataset(k, x.d);
for (int i = 0; i < k; ++i) {
double *cdp = c->data + i * x.d;
memcpy(cdp, x.data + chosen_pts[i] * x.d, sizeof(double) * x.d);
if (c->sumDataSquared) {
c->sumDataSquared[i] = std::inner_product(cdp, cdp + x.d, cdp, 0.0);
}
}
delete [] chosen_pts;
delete [] dist2;
delete [] distribution;
return c;
}
Dataset *init_centers_kmeanspp_v2(Dataset const &x, unsigned short k) {
int *chosen_pts = new int[k];
std::pair<double, int> *dist2 = new std::pair<double, int>[x.n];
// initialize dist2
for (int i = 0; i < x.n; ++i) {
dist2[i].first = std::numeric_limits<double>::max();
dist2[i].second = i;
}
// choose the first point randomly
int ndx = 1;
chosen_pts[ndx - 1] = rand() % x.n;
while (ndx < k) {
double sum_distribution = 0.0;
// look for the point that is furthest from any center
double max_dist = 0.0;
for (int i = 0; i < x.n; ++i) {
int example = dist2[i].second;
double d2 = 0.0, diff;
for (int j = 0; j < x.d; ++j) {
diff = x(example,j) - x(chosen_pts[ndx - 1],j);
d2 += diff * diff;
}
if (d2 < dist2[i].first) {
dist2[i].first = d2;
}
if (dist2[i].first > max_dist) {
max_dist = dist2[i].first;
}
sum_distribution += dist2[i].first;
}
bool unique;
do {
unique = true; // must be reset each attempt, or a duplicate pick loops forever
// choose a random interval according to the new distribution
double r = sum_distribution * (double)rand() / (double)RAND_MAX;
double sum_cdf = dist2[0].first;
int cdf_ndx = 0;
while (sum_cdf < r) {
sum_cdf += dist2[++cdf_ndx].first;
}
chosen_pts[ndx] = cdf_ndx;
for (int i = 0; i < ndx; ++i) {
unique = unique && (chosen_pts[ndx] != chosen_pts[i]);
}
} while (! unique);
++ndx;
}
Dataset *c = new Dataset(k, x.d);
for (int i = 0; i < c->n; ++i) {
double *cdp = c->data + i * x.d;
memcpy(cdp, x.data + chosen_pts[i] * x.d, sizeof(double) * x.d);
if (c->sumDataSquared) {
c->sumDataSquared[i] = std::inner_product(cdp, cdp + x.d, cdp, 0.0);
}
}
delete [] chosen_pts;
delete [] dist2;
return c;
}
void kmeans_assign(Dataset const &x, Dataset const &c, unsigned short *assignment) {
for (int i = 0; i < x.n; ++i) {
double shortestDist2 = std::numeric_limits<double>::max();
int closest = 0;
for (int j = 0; j < c.n; ++j) {
double d2 = 0.0, *a = x.data + i * x.d, *b = c.data + j * x.d;
for (; a != x.data + (i + 1) * x.d; ++a, ++b) {
d2 += (*a - *b) * (*a - *b);
}
if (d2 < shortestDist2) {
shortestDist2 = d2;
closest = j;
}
}
assignment[i] = closest;
}
}


@@ -1,83 +0,0 @@
#ifndef GENERAL_KMEANS_FUNCTIONS_H
#define GENERAL_KMEANS_FUNCTIONS_H
/* Authors: Greg Hamerly and Jonathan Drake
* Feedback: hamerly@cs.baylor.edu
* See: http://cs.baylor.edu/~hamerly/software/kmeans.php
* Copyright 2014
*
* Generally useful functions.
*/
#include <iostream>
#include <string>
#include "kmeans_dataset.h"
/* Add together two vectors, and put the result in the first argument.
* Calculates a = a + b
*
* Parameters:
* a -- vector to add, and the result of the operation
* b -- vector to add to a
* d -- the dimension
* Return value: none
*/
void addVectors(double *a, double const *b, int d);
/* Subtract two vectors, and put the result in the first argument. Calculates
* a = a - b
*
* Parameters:
* a -- vector to subtract from, and the result of the operation
* b -- vector to subtract
* d -- the dimension
* Return value: none
*/
void subVectors(double *a, double const *b, int d);
/* Initialize the centers randomly. Choose k distinct random records from x
 * as the initial values for the centers. Fills in sumDataSquared for each
 * center if the returned Dataset allocates that field.
 *
 * Parameters:
 * x -- records that are being clustered (n * d)
 * k -- the number of centers to choose
 * Return value: a newly-allocated Dataset of k centers; the caller owns it
 */
Dataset *init_centers(Dataset const &x, unsigned short k);
/* Initialize the centers using k-means++ seeding: each new center is drawn
 * with probability proportional to its squared distance from the centers
 * already chosen.
 *
 * Parameters:
 * x -- records that are being clustered (n * d)
 * k -- the number of centers to choose
 * Return value: a newly-allocated Dataset of k centers; the caller owns it
 */
Dataset *init_centers_kmeanspp(Dataset const &x, unsigned short k);
Dataset *init_centers_kmeanspp_v2(Dataset const &x, unsigned short k);
/* Print an array (templated). Convenience function.
*
* Parameters:
* arr -- the array to print
* length -- the length of the array
* separator -- the string to put between each pair of printed elements
* Return value: none
*/
template <class T>
void printArray(T const *arr, int length, std::string separator) {
for (int i = 0; i < length; ++i) {
if (i > 0) {
std::cout << separator;
}
std::cout << arr[i];
}
}
void centerDataset(Dataset *x);
void kmeans_assign(Dataset const &x, Dataset const &c, unsigned short *assignment);
#endif


@@ -1,106 +0,0 @@
/* Authors: Greg Hamerly and Jonathan Drake
* Feedback: hamerly@cs.baylor.edu
* See: http://cs.baylor.edu/~hamerly/software/kmeans.php
* Copyright 2014
*/
#include "original_space_kmeans.h"
#include "kmeans_general_functions.h"
#include <cmath>
#include <cassert>
#include <numeric>
OriginalSpaceKmeans::OriginalSpaceKmeans() : centers(NULL), sumNewCenters(NULL) { }
void OriginalSpaceKmeans::free() {
for (int t = 0; t < numThreads; ++t) {
delete sumNewCenters[t];
}
Kmeans::free();
delete centers;
delete [] sumNewCenters;
centers = NULL;
sumNewCenters = NULL;
}
/* This method moves the newCenters to their new locations, based on the
* sufficient statistics in sumNewCenters. It also computes the centerMovement
* and the center that moved the furthest.
*
* Parameters: none
*
 * Return value: index of the furthest-moving center
*/
int OriginalSpaceKmeans::move_centers() {
int furthestMovingCenter = 0;
for (int j = 0; j < k; ++j) {
centerMovement[j] = 0.0;
int totalClusterSize = 0;
for (int t = 0; t < numThreads; ++t) {
totalClusterSize += clusterSize[t][j];
}
if (totalClusterSize > 0) {
for (int dim = 0; dim < d; ++dim) {
double z = 0.0;
for (int t = 0; t < numThreads; ++t) {
z += (*sumNewCenters[t])(j,dim);
}
z /= totalClusterSize;
centerMovement[j] += (z - (*centers)(j, dim)) * (z - (*centers)(j, dim));
(*centers)(j, dim) = z;
}
}
centerMovement[j] = sqrt(centerMovement[j]);
if (centerMovement[furthestMovingCenter] < centerMovement[j]) {
furthestMovingCenter = j;
}
}
#ifdef COUNT_DISTANCES
numDistances += k;
#endif
return furthestMovingCenter;
}
void OriginalSpaceKmeans::initialize(Dataset const *aX, unsigned short aK, unsigned short *initialAssignment, int aNumThreads) {
Kmeans::initialize(aX, aK, initialAssignment, aNumThreads);
centers = new Dataset(k, d);
sumNewCenters = new Dataset *[numThreads];
centers->fill(0.0);
for (int t = 0; t < numThreads; ++t) {
sumNewCenters[t] = new Dataset(k, d, false);
sumNewCenters[t]->fill(0.0);
for (int i = start(t); i < end(t); ++i) {
addVectors(sumNewCenters[t]->data + assignment[i] * d, x->data + i * d, d);
}
}
// put the centers at their initial locations, based on clusterSize and
// sumNewCenters
move_centers();
}
void OriginalSpaceKmeans::changeAssignment(int xIndex, int closestCluster, int threadId) {
unsigned short oldAssignment = assignment[xIndex];
Kmeans::changeAssignment(xIndex, closestCluster, threadId);
double *xp = x->data + xIndex * d;
subVectors(sumNewCenters[threadId]->data + oldAssignment * d, xp, d);
addVectors(sumNewCenters[threadId]->data + closestCluster * d, xp, d);
}
double OriginalSpaceKmeans::pointPointInnerProduct(int x1, int x2) const {
return std::inner_product(x->data + x1 * d, x->data + (x1 + 1) * d, x->data + x2 * d, 0.0);
}
double OriginalSpaceKmeans::pointCenterInnerProduct(int xndx, unsigned short cndx) const {
return std::inner_product(x->data + xndx * d, x->data + (xndx + 1) * d, centers->data + cndx * d, 0.0);
}
double OriginalSpaceKmeans::centerCenterInnerProduct(unsigned short c1, unsigned short c2) const {
return std::inner_product(centers->data + c1 * d, centers->data + (c1 + 1) * d, centers->data + c2 * d, 0.0);
}


@@ -1,54 +0,0 @@
#ifndef ORIGINAL_SPACE_KMEANS_H
#define ORIGINAL_SPACE_KMEANS_H
/* Authors: Greg Hamerly and Jonathan Drake
* Feedback: hamerly@cs.baylor.edu
* See: http://cs.baylor.edu/~hamerly/software/kmeans.php
* Copyright 2014
*
* OriginalSpaceKmeans is a base class for other algorithms that operate in the
* same space as the data being clustered (as opposed to kernelized k-means
* algorithms, which operate in kernel space).
*/
#include "kmeans.h"
/* Cluster with the cluster centers living in the original space (with the
* data). This is as opposed to a kernelized version of k-means, where the
* center points might not be explicitly represented. This is also an abstract
* class.
*/
class OriginalSpaceKmeans : public Kmeans {
public:
OriginalSpaceKmeans();
virtual ~OriginalSpaceKmeans() { free(); }
virtual void free();
virtual void initialize(Dataset const *aX, unsigned short aK, unsigned short *initialAssignment, int aNumThreads);
virtual double pointPointInnerProduct(int x1ndx, int x2ndx) const;
virtual double pointCenterInnerProduct(int xndx, unsigned short cndx) const;
virtual double centerCenterInnerProduct(unsigned short c1ndx, unsigned short c2ndx) const;
virtual Dataset const *getCenters() const { return centers; }
protected:
// Move the centers to the average of their current assigned points,
// compute the distance moved by each center, and return the index of
// the furthest-moving center.
int move_centers();
virtual void changeAssignment(int xIndex, int closestCluster, int threadId);
// The set of centers we are operating on.
Dataset *centers;
// sumNewCenters and centerCount provide sufficient statistics to
// quickly calculate the changing locations of the centers. Whenever a
// point changes cluster membership, we subtract (add) it from (to) the
// row in sumNewCenters associated with its old (new) cluster. We also
// decrement (increment) centerCount for the old (new) cluster.
Dataset **sumNewCenters;
};
#endif

File diff suppressed because it is too large


@@ -1,79 +0,0 @@
/* Authors: Greg Hamerly and Jonathan Drake
* Feedback: hamerly@cs.baylor.edu
* See: http://cs.baylor.edu/~hamerly/software/kmeans.php
* Copyright 2014
*/
#include "triangle_inequality_base_kmeans.h"
#include "kmeans_general_functions.h"
#include <cassert>
#include <limits>
#include <cmath>
void TriangleInequalityBaseKmeans::free() {
OriginalSpaceKmeans::free();
delete [] s;
delete [] upper;
delete [] lower;
s = NULL;
upper = NULL;
lower = NULL;
}
/* This function computes the inter-center distances, keeping only the closest
* distances, and updates "s". After this, s[j] will contain the distance
* between center j and its closest other center, divided by two. The division
* here saves repeated work later, since we always will need the distance / 2.
*
 * Parameters:
 * threadId -- index of the calling thread; centers are partitioned
 * across threads by (center index mod numThreads)
 *
 * Return value: none
 */
void TriangleInequalityBaseKmeans::update_s(int threadId) {
// initialize
for (int c1 = 0; c1 < k; ++c1) {
if (c1 % numThreads == threadId) {
s[c1] = std::numeric_limits<double>::max();
}
}
// compute inter-center squared distances between all pairs
for (int c1 = 0; c1 < k; ++c1) {
if (c1 % numThreads == threadId) {
for (int c2 = 0; c2 < k; ++c2) {
if (c2 == c1) {
continue;
}
double d2 = centerCenterDist2(c1, c2);
if (d2 < s[c1]) { s[c1] = d2; }
}
// take the root and divide by two
s[c1] = sqrt(s[c1]) / 2.0;
}
}
}
/* This function initializes the upper/lower bounds, assignment, centerCounts,
* and sumNewCenters. It sets the bounds to invalid values which will force the
* first iteration of k-means to set them correctly. NB: subclasses should set
* numLowerBounds appropriately before entering this function.
*
 * Parameters: same as OriginalSpaceKmeans::initialize
*
* Return value: none
*/
void TriangleInequalityBaseKmeans::initialize(Dataset const *aX, unsigned short aK, unsigned short *initialAssignment, int aNumThreads) {
OriginalSpaceKmeans::initialize(aX, aK, initialAssignment, aNumThreads);
s = new double[k];
upper = new double[n];
lower = new double[n * numLowerBounds];
// start with invalid bounds and assignments which will force the first
// iteration of k-means to do all its standard work
std::fill(s, s + k, 0.0);
std::fill(upper, upper + n, std::numeric_limits<double>::max());
std::fill(lower, lower + n * numLowerBounds, 0.0);
}


@@ -1,43 +0,0 @@
#ifndef TRIANGLE_INEQUALITY_BASE_KMEANS_H
#define TRIANGLE_INEQUALITY_BASE_KMEANS_H
/* Authors: Greg Hamerly and Jonathan Drake
* Feedback: hamerly@cs.baylor.edu
* See: http://cs.baylor.edu/~hamerly/software/kmeans.php
* Copyright 2014
*
* This class is an abstract base class for several other algorithms that use
* upper & lower bounds to avoid distance calculations in k-means.
*/
#include "original_space_kmeans.h"
class TriangleInequalityBaseKmeans : public OriginalSpaceKmeans {
public:
TriangleInequalityBaseKmeans() : numLowerBounds(0), s(NULL), upper(NULL), lower(NULL) {}
virtual ~TriangleInequalityBaseKmeans() { free(); }
virtual void initialize(Dataset const *aX, unsigned short aK, unsigned short *initialAssignment, int aNumThreads);
virtual void free();
protected:
void update_s(int threadId);
// The number of lower bounds being used by this algorithm.
int numLowerBounds;
// Half the distance between each center and its closest other center.
double *s;
// One upper bound for each point on the distance between that point and
// its assigned (closest) center.
double *upper;
// Lower bound(s) for each point on the distance between that point and
// the centers being tracked for lower bounds, which may be 1 to k.
// Actual size is n * numLowerBounds.
double *lower;
};
#endif


@@ -1,996 +0,0 @@
/* refactor of Steven Fortune's algorithm from original C to C++ class */
#include "Voronoi.h"
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <algorithm>
Voronoi::Voronoi()
{
// old controls essentially in main.c
//
// the original source was written in such a way that you could
// update the "plotting functions" (line, circle etc) with your
// own code to implement a plot.
//
// we have adapted the "line" function to record lines in the
// output vector, so we can draw them on a plot
//
// testing has suggested that these kinds of diagrams only work
// well when there are lots of cells ie, running kmeans with
// 30 or even 100 clusters.
//
triangulate = 0; // tesselate (we don't support this)
plot = 1; // call "plotting functions" - we use this
debug = 0; // set to 1 to get lots of debug
// malloc lists are maintained and zapped in constructors
freeinit(&sfl, sizeof(Site));
}
Voronoi::~Voronoi()
{
// wipe out the malloc list
foreach(void *m, malloclist) free(m);
}
// add a site to the list, refactoring what used to be in main.c
void
Voronoi::addSite(QPointF point)
{
// as originally in main.c
Site *p = (Site*)getfree(&sfl);
// initialise the site details
p->coord.x = point.x();
p->coord.y = point.y();
p->refcnt=0;
p->sitenbr=sites.count();
sites.append(p);
// keep tabs on xmin, xmax etc
if (sites.count() == 1) {
// first
xmin = point.x();
xmax = point.x();
ymin = point.y();
ymax = point.y();
} else {
// update
if (point.x() < xmin) xmin = point.x();
if (point.y() < ymin) ymin = point.y();
if (point.x() > xmax) xmax = point.x();
if (point.y() > ymax) ymax = point.y();
}
}
// sites need to be sorted, originally in main.c
static bool mySiteSort(const void * vs1, const void * vs2)
{
Point * s1 = (Point *)vs1 ;
Point * s2 = (Point *)vs2 ;
if (s1->y < s2->y)
{
return (true) ;
}
if (s1->y > s2->y)
{
return (false) ;
}
if (s1->x < s2->x)
{
return (true) ;
}
if (s1->x > s2->x)
{
return (false) ;
}
return (false) ;
}
/*** implicit parameters: nsites, sqrt_nsites, xmin, xmax, ymin, ymax,
: deltax, deltay (can all be estimates).
: Performance suffers if they are wrong; better to make nsites,
: deltax, and deltay too big than too small. (?)
***/
// main entry point, originally voronoi()
void
Voronoi::run(QRectF /* boundingRect */)
{
// need at least 2 sites to make any sense
if (sites.count() < 2) return;
// sort the sites
std::sort(sites.begin(), sites.end(), mySiteSort);
// and set the working variables used by the original sources
nsites=sites.count();
// was done in main.c previously
geominit();
plotinit();
// now into the original sources
Site *newsite, * bot, * top, * temp, * p, * v ;
Point newintstar ;
int pm ;
Halfedge * lbnd, * rbnd, * llbnd, * rrbnd, * bisector ;
Edge * e ;
int siteindex = 0; // start at first
PQinitialize() ;
bottomsite = this->sites[siteindex++];
out_site(bottomsite) ;
ELinitialize() ;
newsite = this->sites[siteindex++];
while (1)
{
if(!PQempty())
{
newintstar = PQ_min() ;
}
if (newsite != (Site *)NULL && (PQempty()
|| newsite -> coord.y < newintstar.y
|| (newsite->coord.y == newintstar.y
&& newsite->coord.x < newintstar.x))) { /* new site is smallest */
{
out_site(newsite) ;
}
lbnd = ELleftbnd(&(newsite->coord)) ;
rbnd = ELright(lbnd) ;
bot = rightreg(lbnd) ;
e = bisect(bot, newsite) ;
bisector = HEcreate(e, voronoi_le) ;
ELinsert(lbnd, bisector) ;
p = intersect(lbnd, bisector) ;
if (p != (Site *)NULL)
{
PQdelete(lbnd) ;
PQinsert(lbnd, p, dist(p,newsite)) ;
}
lbnd = bisector ;
bisector = HEcreate(e, voronoi_re) ;
ELinsert(lbnd, bisector) ;
p = intersect(bisector, rbnd) ;
if (p != (Site *)NULL)
{
PQinsert(bisector, p, dist(p,newsite)) ;
}
newsite = siteindex < sites.count() ? sites[siteindex++] : NULL;
}
else if (!PQempty()) /* intersection is smallest */
{
lbnd = PQextractmin() ;
llbnd = ELleft(lbnd) ;
rbnd = ELright(lbnd) ;
rrbnd = ELright(rbnd) ;
bot = leftreg(lbnd) ;
top = rightreg(rbnd) ;
out_triple(bot, top, rightreg(lbnd)) ;
v = lbnd->vertex ;
makevertex(v) ;
endpoint(lbnd->ELedge, lbnd->ELpm, v);
endpoint(rbnd->ELedge, rbnd->ELpm, v) ;
ELdelete(lbnd) ;
PQdelete(rbnd) ;
ELdelete(rbnd) ;
pm = voronoi_le ;
if (bot->coord.y > top->coord.y)
{
temp = bot ;
bot = top ;
top = temp ;
pm = voronoi_re ;
}
e = bisect(bot, top) ;
bisector = HEcreate(e, pm) ;
ELinsert(llbnd, bisector) ;
endpoint(e, voronoi_re-pm, v) ;
deref(v) ;
p = intersect(llbnd, bisector) ;
if (p != (Site *) NULL)
{
PQdelete(llbnd) ;
PQinsert(llbnd, p, dist(p,bot)) ;
}
p = intersect(bisector, rrbnd) ;
if (p != (Site *) NULL)
{
PQinsert(bisector, p, dist(p,bot)) ;
}
}
else
{
break ;
}
}
for( lbnd = ELright(ELleftend) ;
lbnd != ELrightend ;
lbnd = ELright(lbnd))
{
e = lbnd->ELedge ;
out_ep(e) ;
}
}
void
Voronoi::ELinitialize(void)
{
int i ;
freeinit(&hfl, sizeof(Halfedge)) ;
ELhashsize = 2 * sqrt_nsites ;
ELhash = (Halfedge **)myalloc( sizeof(*ELhash) * ELhashsize) ;
for (i = 0 ; i < ELhashsize ; i++)
{
ELhash[i] = (Halfedge *)NULL ;
}
ELleftend = HEcreate((Edge *)NULL, 0) ;
ELrightend = HEcreate((Edge *)NULL, 0) ;
ELleftend->ELleft = (Halfedge *)NULL ;
ELleftend->ELright = ELrightend ;
ELrightend->ELleft = ELleftend ;
ELrightend->ELright = (Halfedge *)NULL ;
ELhash[0] = ELleftend ;
ELhash[ELhashsize-1] = ELrightend ;
}
Halfedge *
Voronoi::HEcreate(Edge * e, int pm)
{
Halfedge * answer ;
answer = (Halfedge *)getfree(&hfl) ;
answer->ELedge = e ;
answer->ELpm = pm ;
answer->PQnext = (Halfedge *)NULL ;
answer->vertex = (Site *)NULL ;
answer->ELrefcnt = 0 ;
return (answer) ;
}
void
Voronoi::ELinsert(Halfedge * lb, Halfedge * newone)
{
newone->ELleft = lb ;
newone->ELright = lb->ELright ;
(lb->ELright)->ELleft = newone ;
lb->ELright = newone ;
}
/* Get entry from hash table, pruning any deleted nodes */
Halfedge *
Voronoi::ELgethash(int b)
{
Halfedge * he ;
if ((b < 0) || (b >= ELhashsize))
{
return ((Halfedge *)NULL) ;
}
he = ELhash[b] ;
if ((he == (Halfedge *)NULL) || (he->ELedge != (Edge *)DELETED))
{
return (he) ;
}
/* Hash table points to deleted half edge. Patch as necessary. */
ELhash[b] = (Halfedge *)NULL ;
if ((--(he->ELrefcnt)) == 0)
{
makefree((Freenode *)he, (Freelist *)&hfl) ;
}
return ((Halfedge *)NULL) ;
}
Halfedge *
Voronoi::ELleftbnd(Point * p)
{
int i, bucket ;
Halfedge * he ;
/* Use hash table to get close to desired halfedge */
bucket = (p->x - xmin) / deltax * ELhashsize ;
if (bucket < 0)
{
bucket = 0 ;
}
if (bucket >= ELhashsize)
{
bucket = ELhashsize - 1 ;
}
he = ELgethash(bucket) ;
if (he == (Halfedge *)NULL)
{
for (i = 1 ; 1 ; i++)
{
if ((he = ELgethash(bucket-i)) != (Halfedge *)NULL)
{
break ;
}
if ((he = ELgethash(bucket+i)) != (Halfedge *)NULL)
{
break ;
}
}
totalsearch += i ;
}
ntry++ ;
/* Now search linear list of halfedges for the correct one */
if (he == ELleftend || (he != ELrightend && right_of(he,p)))
{
do {
he = he->ELright ;
} while (he != ELrightend && right_of(he,p)) ;
he = he->ELleft ;
}
else
{
do {
he = he->ELleft ;
} while (he != ELleftend && !right_of(he,p)) ;
}
/*** Update hash table and reference counts ***/
if ((bucket > 0) && (bucket < ELhashsize-1))
{
if (ELhash[bucket] != (Halfedge *)NULL)
{
(ELhash[bucket]->ELrefcnt)-- ;
}
ELhash[bucket] = he ;
(ELhash[bucket]->ELrefcnt)++ ;
}
return (he) ;
}
/*** This delete routine can't reclaim node, since pointers from hash
: table may be present.
***/
void
Voronoi::ELdelete(Halfedge * he)
{
(he->ELleft)->ELright = he->ELright ;
(he->ELright)->ELleft = he->ELleft ;
he->ELedge = (Edge *)DELETED ;
}
Halfedge *
Voronoi::ELright(Halfedge * he)
{
return (he->ELright) ;
}
Halfedge *
Voronoi::ELleft(Halfedge * he)
{
return (he->ELleft) ;
}
Site *
Voronoi::leftreg(Halfedge * he)
{
if (he->ELedge == (Edge *)NULL)
{
return(bottomsite) ;
}
return (he->ELpm == voronoi_le ? he->ELedge->reg[voronoi_le] :
he->ELedge->reg[voronoi_re]) ;
}
Site *
Voronoi::rightreg(Halfedge * he)
{
if (he->ELedge == (Edge *)NULL)
{
return(bottomsite) ;
}
return (he->ELpm == voronoi_le ? he->ELedge->reg[voronoi_re] :
he->ELedge->reg[voronoi_le]) ;
}
void
Voronoi::geominit(void)
{
freeinit(&efl, sizeof(Edge)) ;
nvertices = nedges = 0 ;
sqrt_nsites = sqrt(nsites+4) ;
deltay = ymax - ymin ;
deltax = xmax - xmin ;
}
Edge *
Voronoi::bisect(Site * s1, Site * s2)
{
float dx, dy, adx, ady ;
Edge * newedge ;
newedge = (Edge *)getfree(&efl) ;
newedge->reg[0] = s1 ;
newedge->reg[1] = s2 ;
ref(s1) ;
ref(s2) ;
newedge->ep[0] = newedge->ep[1] = (Site *)NULL ;
dx = s2->coord.x - s1->coord.x ;
dy = s2->coord.y - s1->coord.y ;
adx = dx>0 ? dx : -dx ;
ady = dy>0 ? dy : -dy ;
newedge->c = s1->coord.x * dx + s1->coord.y * dy + (dx*dx +
dy*dy) * 0.5 ;
if (adx > ady)
{
newedge->a = 1.0 ;
newedge->b = dy/dx ;
newedge->c /= dx ;
}
else
{
newedge->b = 1.0 ;
newedge->a = dx/dy ;
newedge->c /= dy ;
}
newedge->edgenbr = nedges ;
out_bisector(newedge) ;
nedges++ ;
return (newedge) ;
}
Site *
Voronoi::intersect(Halfedge * el1, Halfedge * el2)
{
Edge * e1, * e2, * e ;
Halfedge * el ;
float d, xint, yint ;
int right_of_site ;
Site * v ;
e1 = el1->ELedge ;
e2 = el2->ELedge ;
if ((e1 == (Edge*)NULL) || (e2 == (Edge*)NULL))
{
return ((Site *)NULL) ;
}
if (e1->reg[1] == e2->reg[1])
{
return ((Site *)NULL) ;
}
d = (e1->a * e2->b) - (e1->b * e2->a) ;
if ((-1.0e-10 < d) && (d < 1.0e-10))
{
return ((Site *)NULL) ;
}
xint = (e1->c * e2->b - e2->c * e1->b) / d ;
yint = (e2->c * e1->a - e1->c * e2->a) / d ;
if ((e1->reg[1]->coord.y < e2->reg[1]->coord.y) ||
(e1->reg[1]->coord.y == e2->reg[1]->coord.y &&
e1->reg[1]->coord.x < e2->reg[1]->coord.x))
{
el = el1 ;
e = e1 ;
}
else
{
el = el2 ;
e = e2 ;
}
right_of_site = (xint >= e->reg[1]->coord.x) ;
if ((right_of_site && (el->ELpm == voronoi_le)) ||
(!right_of_site && (el->ELpm == voronoi_re)))
{
return ((Site *)NULL) ;
}
v = (Site *)getfree(&sfl) ;
v->refcnt = 0 ;
v->coord.x = xint ;
v->coord.y = yint ;
return (v) ;
}
/*** returns 1 if p is to right of halfedge e ***/
int
Voronoi::right_of(Halfedge * el, Point * p)
{
Edge * e ;
Site * topsite ;
int right_of_site, above, fast ;
float dxp, dyp, dxs, t1, t2, t3, yl ;
e = el->ELedge ;
topsite = e->reg[1] ;
right_of_site = (p->x > topsite->coord.x) ;
if (right_of_site && (el->ELpm == voronoi_le))
{
return (1) ;
}
if(!right_of_site && (el->ELpm == voronoi_re))
{
return (0) ;
}
if (e->a == 1.0)
{
dyp = p->y - topsite->coord.y ;
dxp = p->x - topsite->coord.x ;
fast = 0 ;
if ((!right_of_site && (e->b < 0.0)) ||
(right_of_site && (e->b >= 0.0)))
{
fast = above = (dyp >= e->b*dxp) ;
}
else
{
above = ((p->x + p->y * e->b) > (e->c)) ;
if (e->b < 0.0)
{
above = !above ;
}
if (!above)
{
fast = 1 ;
}
}
if (!fast)
{
dxs = topsite->coord.x - (e->reg[0])->coord.x ;
above = (e->b * (dxp*dxp - dyp*dyp))
<
(dxs * dyp * (1.0 + 2.0 * dxp /
dxs + e->b * e->b)) ;
if (e->b < 0.0)
{
above = !above ;
}
}
}
else /*** e->b == 1.0 ***/
{
yl = e->c - e->a * p->x ;
t1 = p->y - yl ;
t2 = p->x - topsite->coord.x ;
t3 = yl - topsite->coord.y ;
above = ((t1*t1) > ((t2 * t2) + (t3 * t3))) ;
}
return (el->ELpm == voronoi_le ? above : !above) ;
}
void
Voronoi::endpoint(Edge * e, int lr, Site * s)
{
e->ep[lr] = s ;
ref(s) ;
if (e->ep[voronoi_re-lr] == (Site *)NULL)
{
return ;
}
out_ep(e) ;
deref(e->reg[voronoi_le]) ;
deref(e->reg[voronoi_re]) ;
makefree((Freenode *)e, (Freelist *) &efl) ;
}
float
Voronoi::dist(Site * s, Site * t)
{
float dx,dy ;
dx = s->coord.x - t->coord.x ;
dy = s->coord.y - t->coord.y ;
return (sqrt(dx*dx + dy*dy)) ;
}
void
Voronoi::makevertex(Site * v)
{
v->sitenbr = nvertices++ ;
out_vertex(v) ;
}
void
Voronoi::deref(Site * v)
{
if (--(v->refcnt) == 0 )
{
makefree((Freenode *)v, (Freelist *)&sfl) ;
}
}
void
Voronoi::ref(Site * v)
{
++(v->refcnt) ;
}
void
Voronoi::PQinsert(Halfedge * he, Site * v, float offset)
{
Halfedge * last, * next ;
he->vertex = v ;
ref(v) ;
he->ystar = v->coord.y + offset ;
last = &PQhash[ PQbucket(he)] ;
while ((next = last->PQnext) != (Halfedge *)NULL &&
(he->ystar > next->ystar ||
(he->ystar == next->ystar &&
v->coord.x > next->vertex->coord.x)))
{
last = next ;
}
he->PQnext = last->PQnext ;
last->PQnext = he ;
PQcount++ ;
}
void
Voronoi::PQdelete(Halfedge * he)
{
Halfedge * last;
if(he -> vertex != (Site *) NULL)
{
last = &PQhash[PQbucket(he)] ;
while (last -> PQnext != he)
{
last = last->PQnext ;
}
last->PQnext = he->PQnext;
PQcount-- ;
deref(he->vertex) ;
he->vertex = (Site *)NULL ;
}
}
int
Voronoi::PQbucket(Halfedge * he)
{
int bucket ;
if (he->ystar < ymin) bucket = 0;
else if (he->ystar >= ymax) bucket = PQhashsize-1;
else bucket = (he->ystar - ymin)/deltay * PQhashsize;
if (bucket < 0)
{
bucket = 0 ;
}
if (bucket >= PQhashsize)
{
bucket = PQhashsize-1 ;
}
if (bucket < PQmin)
{
PQmin = bucket ;
}
return (bucket);
}
int
Voronoi::PQempty(void)
{
return (PQcount == 0) ;
}
Point
Voronoi::PQ_min(void)
{
Point answer ;
while (PQhash[PQmin].PQnext == (Halfedge *)NULL)
{
++PQmin ;
}
answer.x = PQhash[PQmin].PQnext->vertex->coord.x ;
answer.y = PQhash[PQmin].PQnext->ystar ;
return (answer) ;
}
Halfedge *
Voronoi::PQextractmin(void)
{
Halfedge * curr ;
curr = PQhash[PQmin].PQnext ;
PQhash[PQmin].PQnext = curr->PQnext ;
PQcount-- ;
return (curr) ;
}
void
Voronoi::PQinitialize(void)
{
int i ;
PQcount = PQmin = 0 ;
PQhashsize = 4 * sqrt_nsites ;
PQhash = (Halfedge *)myalloc(PQhashsize * sizeof *PQhash) ;
for (i = 0 ; i < PQhashsize; i++)
{
PQhash[i].PQnext = (Halfedge *)NULL ;
}
}
void
Voronoi::freeinit(Freelist * fl, int size)
{
fl->head = (Freenode *)NULL ;
fl->nodesize = size ;
}
char *
Voronoi::getfree(Freelist * fl)
{
int i ;
Freenode * t ;
if (fl->head == (Freenode *)NULL)
{
t = (Freenode *) myalloc(100 * fl->nodesize) ;
for(i = 0 ; i < 100 ; i++)
{
makefree((Freenode *)((char *)t+i*fl->nodesize), fl) ;
}
}
t = fl->head ;
fl->head = (fl->head)->nextfree ;
return ((char *)t) ;
}
void
Voronoi::makefree(Freenode * curr, Freelist * fl)
{
curr->nextfree = fl->head ;
fl->head = curr ;
}
char *
Voronoi::myalloc(unsigned n)
{
char * t ;
if ((t=(char*)malloc(n)) == (char *) 0)
{
fprintf(stderr,"Insufficient memory processing site %d (%d bytes in use)\n",
siteidx, total_alloc) ;
return(0) ; // was exit(0) in the original source; we aren't having that here
}
total_alloc += n ;
// keep tabs so we can zap in destructor
malloclist << t;
return (t) ;
}
void
Voronoi::openpl(void)
{
output.clear();
}
void
Voronoi::line(float ax, float ay, float bx, float by)
{
output << QLineF(QPointF(ax,ay), QPointF(bx,by));
}
void
Voronoi::circle(float ax, float ay, float radius)
{
// stub: we record only lines, not site markers
(void)ax; (void)ay; (void)radius;
}
void
Voronoi::range(float pxmin, float pxmax, float pymin, float pymax)
{
// stub: the plot range is not needed for line output
(void)pxmin; (void)pxmax; (void)pymin; (void)pymax;
}
void
Voronoi::out_bisector(Edge * e)
{
if (triangulate && plot && !debug)
{
line(e->reg[0]->coord.x, e->reg[0]->coord.y,
e->reg[1]->coord.x, e->reg[1]->coord.y) ;
}
if (!triangulate && !plot && !debug)
{
printf("l %f %f %f\n", e->a, e->b, e->c) ;
}
if (debug)
{
printf("line(%d) %gx+%gy=%g, bisecting %d %d\n", e->edgenbr,
e->a, e->b, e->c, e->reg[voronoi_le]->sitenbr, e->reg[voronoi_re]->sitenbr) ;
}
}
void
Voronoi::out_ep(Edge * e)
{
if (!triangulate && plot)
{
clip_line(e) ;
}
if (!triangulate && !plot)
{
printf("e %d", e->edgenbr);
printf(" %d ", e->ep[voronoi_le] != (Site *)NULL ? e->ep[voronoi_le]->sitenbr : -1) ;
printf("%d\n", e->ep[voronoi_re] != (Site *)NULL ? e->ep[voronoi_re]->sitenbr : -1) ;
}
}
void
Voronoi::out_vertex(Site * v)
{
if (!triangulate && !plot && !debug)
{
printf ("v %f %f\n", v->coord.x, v->coord.y) ;
}
if (debug)
{
printf("vertex(%d) at %f %f\n", v->sitenbr, v->coord.x, v->coord.y) ;
}
}
void
Voronoi::out_site(Site * s)
{
if (!triangulate && plot && !debug)
{
circle (s->coord.x, s->coord.y, cradius) ;
}
if (!triangulate && !plot && !debug)
{
printf("s %f %f\n", s->coord.x, s->coord.y) ;
}
if (debug)
{
printf("site (%d) at %f %f\n", s->sitenbr, s->coord.x, s->coord.y) ;
}
}
void
Voronoi::out_triple(Site * s1, Site * s2, Site * s3)
{
if (triangulate && !plot && !debug)
{
printf("%d %d %d\n", s1->sitenbr, s2->sitenbr, s3->sitenbr) ;
}
if (debug)
{
printf("circle through left=%d right=%d bottom=%d\n",
s1->sitenbr, s2->sitenbr, s3->sitenbr) ;
}
}
void
Voronoi::plotinit(void)
{
float dx, dy, d ;
dy = ymax - ymin ;
dx = xmax - xmin ;
d = ( dx > dy ? dx : dy) * 1.1 ;
pxmin = xmin - (d-dx) / 2.0 ;
pxmax = xmax + (d-dx) / 2.0 ;
pymin = ymin - (d-dy) / 2.0 ;
pymax = ymax + (d-dy) / 2.0 ;
cradius = (pxmax - pxmin) / 350.0 ;
openpl() ;
range(pxmin, pxmax, pymin, pymax) ;
}
void
Voronoi::clip_line(Edge * e)
{
Site * s1, * s2 ;
float x1, x2, y1, y2 ;
if (e->a == 1.0 && e->b >= 0.0)
{
s1 = e->ep[1] ;
s2 = e->ep[0] ;
}
else
{
s1 = e->ep[0] ;
s2 = e->ep[1] ;
}
if (e->a == 1.0)
{
y1 = pymin ;
if (s1 != (Site *)NULL && s1->coord.y > pymin)
{
y1 = s1->coord.y ;
}
if (y1 > pymax)
{
return ;
}
x1 = e->c - e->b * y1 ;
y2 = pymax ;
if (s2 != (Site *)NULL && s2->coord.y < pymax)
{
y2 = s2->coord.y ;
}
if (y2 < pymin)
{
return ;
}
x2 = e->c - e->b * y2 ;
if (((x1 > pxmax) && (x2 > pxmax)) || ((x1 < pxmin) && (x2 < pxmin)))
{
return ;
}
if (x1 > pxmax)
{
x1 = pxmax ;
y1 = (e->c - x1) / e->b ;
}
if (x1 < pxmin)
{
x1 = pxmin ;
y1 = (e->c - x1) / e->b ;
}
if (x2 > pxmax)
{
x2 = pxmax ;
y2 = (e->c - x2) / e->b ;
}
if (x2 < pxmin)
{
x2 = pxmin ;
y2 = (e->c - x2) / e->b ;
}
}
else
{
x1 = pxmin ;
if (s1 != (Site *)NULL && s1->coord.x > pxmin)
{
x1 = s1->coord.x ;
}
if (x1 > pxmax)
{
return ;
}
y1 = e->c - e->a * x1 ;
x2 = pxmax ;
if (s2 != (Site *)NULL && s2->coord.x < pxmax)
{
x2 = s2->coord.x ;
}
if (x2 < pxmin)
{
return ;
}
y2 = e->c - e->a * x2 ;
        if (((y1 > pymax) && (y2 > pymax)) || ((y1 < pymin) && (y2 < pymin)))
{
return ;
}
        if (y1 > pymax)
{
y1 = pymax ;
x1 = (e->c - y1) / e->a ;
}
if (y1 < pymin)
{
y1 = pymin ;
x1 = (e->c - y1) / e->a ;
}
if (y2 > pymax)
{
y2 = pymax ;
x2 = (e->c - y2) / e->a ;
}
if (y2 < pymin)
{
y2 = pymin ;
x2 = (e->c - y2) / e->a ;
}
}
line(x1,y1,x2,y2);
}

@@ -1,164 +0,0 @@
/* refactor of Steve Fortune's algorithm from original C to C++ class */
#include <QList>
#include <QRectF>
#include <QPointF>
#include <QLineF>
#ifndef __VDEFS_H
#define __VDEFS_H 1
#ifndef NULL
#define NULL 0
#endif
#define DELETED -2
typedef struct tagFreenode
{
struct tagFreenode * nextfree;
} Freenode ;
typedef struct tagFreelist
{
Freenode * head;
int nodesize;
} Freelist ;
typedef struct tagPoint
{
float x ;
float y ;
} Point ;
/* structure used both for sites and for vertices */
typedef struct tagSite
{
Point coord ;
int sitenbr ;
int refcnt ;
} Site ;
typedef struct tagEdge
{
float a, b, c ;
Site * ep[2] ;
Site * reg[2] ;
int edgenbr ;
} Edge ;
// renamed from original re and le as they clash
#define voronoi_le 0
#define voronoi_re 1
typedef struct tagHalfedge
{
struct tagHalfedge * ELleft ;
struct tagHalfedge * ELright ;
Edge * ELedge ;
int ELrefcnt ;
char ELpm ;
Site * vertex ;
float ystar ;
struct tagHalfedge * PQnext ;
} Halfedge ;
class Voronoi {
public:
Voronoi();
~Voronoi();
void addSite(QPointF point);
void run(QRectF boundingRect);
// the output is a vector of lines to draw
QList<QLineF> &lines() { return output; }
private:
// original global variables
int triangulate, plot, debug, nsites, siteidx ;
float xmin, xmax, ymin, ymax ;
int ELhashsize ;
Site * bottomsite ;
Freelist hfl ;
Halfedge * ELleftend, * ELrightend, **ELhash ;
int ntry, totalsearch ;
float deltax, deltay ;
int nedges, sqrt_nsites, nvertices ;
Freelist sfl ;
Freelist efl ;
int PQmin, PQcount, PQhashsize ;
Halfedge * PQhash ;
int total_alloc ;
float pxmin, pxmax, pymin, pymax, cradius;
// refactoring to Qt containers
QList<void *> malloclist; // keep tabs on all the malloc'ed memory
QList<Site*> sites;
QList<QLineF> output;
/*** implicit parameters: nsites, sqrt_nsites, xmin, xmax, ymin, ymax,
: deltax, deltay (can all be estimates).
: Performance suffers if they are wrong; better to make nsites,
: deltax, and deltay too big than too small. (?)
***/
// DCEL routines
void ELinitialize(void);
Halfedge * HEcreate(Edge * e, int pm);
void ELinsert(Halfedge * lb, Halfedge * newone);
Halfedge * ELgethash(int b);
Halfedge * ELleftbnd(Point * p);
void ELdelete(Halfedge * he);
Halfedge * ELright(Halfedge * he);
Halfedge * ELleft(Halfedge * he);
Site * leftreg(Halfedge * he);
Site * rightreg(Halfedge * he);
// geometry
void geominit(void);
Edge * bisect(Site * s1, Site * s2);
Site * intersect(Halfedge * el1, Halfedge * el2);
int right_of(Halfedge * el, Point * p);
void endpoint(Edge * e, int lr, Site * s);
float dist(Site * s, Site * t);
void makevertex(Site * v);
void deref(Site * v);
void ref(Site * v);
void PQinsert(Halfedge * he, Site * v, float offset);
void PQdelete(Halfedge * he);
int PQbucket(Halfedge * he);
int PQempty(void);
Point PQ_min(void);
Halfedge * PQextractmin(void);
void PQinitialize(void);
// c memory management functions
void freeinit(Freelist * fl, int size);
char * getfree(Freelist * fl);
void makefree(Freenode * curr, Freelist * fl);
char * myalloc(unsigned n);
// output functions (not used in GC)
void openpl(void);
void line(float ax, float ay, float bx, float by);
void circle(float ax, float ay, float radius);
void range(float pxmin, float pxmax, float pymin, float pymax);
void out_bisector(Edge * e);
void out_ep(Edge * e);
void out_vertex(Site * v);
void out_site(Site * s);
void out_triple(Site * s1, Site * s2, Site * s3);
void plotinit(void);
void clip_line(Edge * e);
};
#endif

@@ -1,30 +0,0 @@
voronoi - compute Voronoi diagram or Delaunay triangulation
SYNOPSIS
voronoi [-s -t] <pointfile >outputfile
Voronoi reads the standard input for a set of points in the plane and writes either
the Voronoi diagram or the Delaunay triangulation to the standard output.
Each input line should consist of two real numbers, separated by white space.
If option
-t
is present, the Delaunay triangulation is produced.
Each output line is a triple
i j k
which are the indices of the three points in a Delaunay triangle. Points are
numbered starting at 0. If this option is not present, the
Voronoi diagram is produced. There are four output record types.
s a b
indicates that an input point at coordinates a b was read.
l a b c
indicates a line with equation ax + by = c.
v a b
indicates a vertex at a b.
e l v1 v2
indicates a Voronoi segment which is a subsegment of line number l,
with endpoints numbered v1 and v2. If v1 or v2 is -1, the line
extends to infinity.
AUTHOR
Steve J. Fortune (1987) A Sweepline Algorithm for Voronoi Diagrams,
Algorithmica 2, 153-174.

@@ -1,14 +0,0 @@
C=edgelist.c geometry.c heap.c main.c memory.c output.c voronoi.c
O=edgelist.o geometry.o heap.o main.o memory.o output.o voronoi.o
tt: voronoi t
./voronoi <t >tt
voronoi: $O
cc -o voronoi $O -lm
$O:vdefs.h
voronoi.tar : $C vdefs.h Makefile Doc t
tar -cf voronoi.tar $C vdefs.h Makefile Doc t
mailable: $C vdefs.h Makefile t
bundle $C vdefs.h Makefile t > mailable

@@ -1,188 +0,0 @@
/*** EDGELIST.C ***/
#include "vdefs.h"
int ELhashsize ;
Site * bottomsite ;
Freelist hfl ;
Halfedge * ELleftend, * ELrightend, **ELhash ;
int ntry, totalsearch ;
void
ELinitialize(void)
{
int i ;
freeinit(&hfl, sizeof(Halfedge)) ;
ELhashsize = 2 * sqrt_nsites ;
ELhash = (Halfedge **)myalloc( sizeof(*ELhash) * ELhashsize) ;
for (i = 0 ; i < ELhashsize ; i++)
{
ELhash[i] = (Halfedge *)NULL ;
}
ELleftend = HEcreate((Edge *)NULL, 0) ;
ELrightend = HEcreate((Edge *)NULL, 0) ;
ELleftend->ELleft = (Halfedge *)NULL ;
ELleftend->ELright = ELrightend ;
ELrightend->ELleft = ELleftend ;
ELrightend->ELright = (Halfedge *)NULL ;
ELhash[0] = ELleftend ;
ELhash[ELhashsize-1] = ELrightend ;
}
Halfedge *
HEcreate(Edge * e, int pm)
{
Halfedge * answer ;
answer = (Halfedge *)getfree(&hfl) ;
answer->ELedge = e ;
answer->ELpm = pm ;
answer->PQnext = (Halfedge *)NULL ;
answer->vertex = (Site *)NULL ;
answer->ELrefcnt = 0 ;
return (answer) ;
}
void
ELinsert(Halfedge * lb, Halfedge * new)
{
new->ELleft = lb ;
new->ELright = lb->ELright ;
(lb->ELright)->ELleft = new ;
lb->ELright = new ;
}
/* Get entry from hash table, pruning any deleted nodes */
Halfedge *
ELgethash(int b)
{
Halfedge * he ;
if ((b < 0) || (b >= ELhashsize))
{
return ((Halfedge *)NULL) ;
}
he = ELhash[b] ;
if ((he == (Halfedge *)NULL) || (he->ELedge != (Edge *)DELETED))
{
return (he) ;
}
/* Hash table points to deleted half edge. Patch as necessary. */
ELhash[b] = (Halfedge *)NULL ;
if ((--(he->ELrefcnt)) == 0)
{
makefree((Freenode *)he, (Freelist *)&hfl) ;
}
return ((Halfedge *)NULL) ;
}
Halfedge *
ELleftbnd(Point * p)
{
int i, bucket ;
Halfedge * he ;
/* Use hash table to get close to desired halfedge */
bucket = (p->x - xmin) / deltax * ELhashsize ;
if (bucket < 0)
{
bucket = 0 ;
}
if (bucket >= ELhashsize)
{
bucket = ELhashsize - 1 ;
}
he = ELgethash(bucket) ;
if (he == (Halfedge *)NULL)
{
for (i = 1 ; 1 ; i++)
{
if ((he = ELgethash(bucket-i)) != (Halfedge *)NULL)
{
break ;
}
if ((he = ELgethash(bucket+i)) != (Halfedge *)NULL)
{
break ;
}
}
totalsearch += i ;
}
ntry++ ;
    /* Now search linear list of halfedges for the correct one */
if (he == ELleftend || (he != ELrightend && right_of(he,p)))
{
do {
he = he->ELright ;
} while (he != ELrightend && right_of(he,p)) ;
he = he->ELleft ;
}
else
{
do {
he = he->ELleft ;
} while (he != ELleftend && !right_of(he,p)) ;
}
/*** Update hash table and reference counts ***/
if ((bucket > 0) && (bucket < ELhashsize-1))
{
if (ELhash[bucket] != (Halfedge *)NULL)
{
(ELhash[bucket]->ELrefcnt)-- ;
}
ELhash[bucket] = he ;
(ELhash[bucket]->ELrefcnt)++ ;
}
return (he) ;
}
/*** This delete routine can't reclaim node, since pointers from hash
: table may be present.
***/
void
ELdelete(Halfedge * he)
{
(he->ELleft)->ELright = he->ELright ;
(he->ELright)->ELleft = he->ELleft ;
he->ELedge = (Edge *)DELETED ;
}
Halfedge *
ELright(Halfedge * he)
{
return (he->ELright) ;
}
Halfedge *
ELleft(Halfedge * he)
{
return (he->ELleft) ;
}
Site *
leftreg(Halfedge * he)
{
if (he->ELedge == (Edge *)NULL)
{
return(bottomsite) ;
}
return (he->ELpm == le ? he->ELedge->reg[le] :
he->ELedge->reg[re]) ;
}
Site *
rightreg(Halfedge * he)
{
if (he->ELedge == (Edge *)NULL)
{
return(bottomsite) ;
}
return (he->ELpm == le ? he->ELedge->reg[re] :
he->ELedge->reg[le]) ;
}

@@ -1,221 +0,0 @@
/*** GEOMETRY.C ***/
#include <math.h>
#include "vdefs.h"
float deltax, deltay ;
int nedges, sqrt_nsites, nvertices ;
Freelist efl ;
void
geominit(void)
{
freeinit(&efl, sizeof(Edge)) ;
nvertices = nedges = 0 ;
sqrt_nsites = sqrt(nsites+4) ;
deltay = ymax - ymin ;
deltax = xmax - xmin ;
}
Edge *
bisect(Site * s1, Site * s2)
{
float dx, dy, adx, ady ;
Edge * newedge ;
newedge = (Edge *)getfree(&efl) ;
newedge->reg[0] = s1 ;
newedge->reg[1] = s2 ;
ref(s1) ;
ref(s2) ;
newedge->ep[0] = newedge->ep[1] = (Site *)NULL ;
dx = s2->coord.x - s1->coord.x ;
dy = s2->coord.y - s1->coord.y ;
adx = dx>0 ? dx : -dx ;
ady = dy>0 ? dy : -dy ;
newedge->c = s1->coord.x * dx + s1->coord.y * dy + (dx*dx +
dy*dy) * 0.5 ;
if (adx > ady)
{
newedge->a = 1.0 ;
newedge->b = dy/dx ;
newedge->c /= dx ;
}
else
{
newedge->b = 1.0 ;
newedge->a = dx/dy ;
newedge->c /= dy ;
}
newedge->edgenbr = nedges ;
out_bisector(newedge) ;
nedges++ ;
return (newedge) ;
}
Site *
intersect(Halfedge * el1, Halfedge * el2)
{
Edge * e1, * e2, * e ;
Halfedge * el ;
float d, xint, yint ;
int right_of_site ;
Site * v ;
e1 = el1->ELedge ;
e2 = el2->ELedge ;
if ((e1 == (Edge*)NULL) || (e2 == (Edge*)NULL))
{
return ((Site *)NULL) ;
}
if (e1->reg[1] == e2->reg[1])
{
return ((Site *)NULL) ;
}
d = (e1->a * e2->b) - (e1->b * e2->a) ;
if ((-1.0e-10 < d) && (d < 1.0e-10))
{
return ((Site *)NULL) ;
}
xint = (e1->c * e2->b - e2->c * e1->b) / d ;
yint = (e2->c * e1->a - e1->c * e2->a) / d ;
if ((e1->reg[1]->coord.y < e2->reg[1]->coord.y) ||
(e1->reg[1]->coord.y == e2->reg[1]->coord.y &&
e1->reg[1]->coord.x < e2->reg[1]->coord.x))
{
el = el1 ;
e = e1 ;
}
else
{
el = el2 ;
e = e2 ;
}
right_of_site = (xint >= e->reg[1]->coord.x) ;
if ((right_of_site && (el->ELpm == le)) ||
(!right_of_site && (el->ELpm == re)))
{
return ((Site *)NULL) ;
}
v = (Site *)getfree(&sfl) ;
v->refcnt = 0 ;
v->coord.x = xint ;
v->coord.y = yint ;
return (v) ;
}
/*** returns 1 if p is to right of halfedge e ***/
int
right_of(Halfedge * el, Point * p)
{
Edge * e ;
Site * topsite ;
int right_of_site, above, fast ;
float dxp, dyp, dxs, t1, t2, t3, yl ;
e = el->ELedge ;
topsite = e->reg[1] ;
right_of_site = (p->x > topsite->coord.x) ;
if (right_of_site && (el->ELpm == le))
{
return (1) ;
}
if(!right_of_site && (el->ELpm == re))
{
return (0) ;
}
if (e->a == 1.0)
{
dyp = p->y - topsite->coord.y ;
dxp = p->x - topsite->coord.x ;
fast = 0 ;
if ((!right_of_site & (e->b < 0.0)) ||
(right_of_site & (e->b >= 0.0)))
{
fast = above = (dyp >= e->b*dxp) ;
}
else
{
above = ((p->x + p->y * e->b) > (e->c)) ;
if (e->b < 0.0)
{
above = !above ;
}
if (!above)
{
fast = 1 ;
}
}
if (!fast)
{
dxs = topsite->coord.x - (e->reg[0])->coord.x ;
above = (e->b * (dxp*dxp - dyp*dyp))
<
(dxs * dyp * (1.0 + 2.0 * dxp /
dxs + e->b * e->b)) ;
if (e->b < 0.0)
{
above = !above ;
}
}
}
else /*** e->b == 1.0 ***/
{
yl = e->c - e->a * p->x ;
t1 = p->y - yl ;
t2 = p->x - topsite->coord.x ;
t3 = yl - topsite->coord.y ;
above = ((t1*t1) > ((t2 * t2) + (t3 * t3))) ;
}
return (el->ELpm == le ? above : !above) ;
}
void
endpoint(Edge * e, int lr, Site * s)
{
e->ep[lr] = s ;
ref(s) ;
if (e->ep[re-lr] == (Site *)NULL)
{
return ;
}
out_ep(e) ;
deref(e->reg[le]) ;
deref(e->reg[re]) ;
makefree((Freenode *)e, (Freelist *) &efl) ;
}
float
dist(Site * s, Site * t)
{
float dx,dy ;
dx = s->coord.x - t->coord.x ;
dy = s->coord.y - t->coord.y ;
return (sqrt(dx*dx + dy*dy)) ;
}
void
makevertex(Site * v)
{
v->sitenbr = nvertices++ ;
out_vertex(v) ;
}
void
deref(Site * v)
{
if (--(v->refcnt) == 0 )
{
makefree((Freenode *)v, (Freelist *)&sfl) ;
}
}
void
ref(Site * v)
{
++(v->refcnt) ;
}

@@ -1,118 +0,0 @@
/*** HEAP.C ***/
#include "vdefs.h"
int PQmin, PQcount, PQhashsize ;
Halfedge * PQhash ;
void
PQinsert(Halfedge * he, Site * v, float offset)
{
Halfedge * last, * next ;
he->vertex = v ;
ref(v) ;
he->ystar = v->coord.y + offset ;
last = &PQhash[ PQbucket(he)] ;
while ((next = last->PQnext) != (Halfedge *)NULL &&
(he->ystar > next->ystar ||
(he->ystar == next->ystar &&
v->coord.x > next->vertex->coord.x)))
{
last = next ;
}
he->PQnext = last->PQnext ;
last->PQnext = he ;
PQcount++ ;
}
void
PQdelete(Halfedge * he)
{
Halfedge * last;
if(he -> vertex != (Site *) NULL)
{
last = &PQhash[PQbucket(he)] ;
while (last -> PQnext != he)
{
last = last->PQnext ;
}
last->PQnext = he->PQnext;
PQcount-- ;
deref(he->vertex) ;
he->vertex = (Site *)NULL ;
}
}
int
PQbucket(Halfedge * he)
{
int bucket ;
if (he->ystar < ymin) bucket = 0;
else if (he->ystar >= ymax) bucket = PQhashsize-1;
else bucket = (he->ystar - ymin)/deltay * PQhashsize;
if (bucket < 0)
{
bucket = 0 ;
}
if (bucket >= PQhashsize)
{
bucket = PQhashsize-1 ;
}
if (bucket < PQmin)
{
PQmin = bucket ;
}
return (bucket);
}
int
PQempty(void)
{
return (PQcount == 0) ;
}
Point
PQ_min(void)
{
Point answer ;
while (PQhash[PQmin].PQnext == (Halfedge *)NULL)
{
++PQmin ;
}
answer.x = PQhash[PQmin].PQnext->vertex->coord.x ;
answer.y = PQhash[PQmin].PQnext->ystar ;
return (answer) ;
}
Halfedge *
PQextractmin(void)
{
Halfedge * curr ;
curr = PQhash[PQmin].PQnext ;
PQhash[PQmin].PQnext = curr->PQnext ;
PQcount-- ;
return (curr) ;
}
void
PQinitialize(void)
{
int i ;
PQcount = PQmin = 0 ;
PQhashsize = 4 * sqrt_nsites ;
PQhash = (Halfedge *)myalloc(PQhashsize * sizeof *PQhash) ;
for (i = 0 ; i < PQhashsize; i++)
{
PQhash[i].PQnext = (Halfedge *)NULL ;
}
}

@@ -1,161 +0,0 @@
/*** MAIN.C ***/
#include <stdio.h>
#include <stdlib.h> /* realloc(), qsort() */
#include <unistd.h> /* getopt() */
#include "vdefs.h"
Site * readone(void), * nextone(void) ;
void readsites(void) ;
int sorted, triangulate, plot, debug, nsites, siteidx ;
float xmin, xmax, ymin, ymax ;
Site * sites ;
Freelist sfl ;
int
main(int argc, char *argv[])
{
int c ;
Site *(*next)() ;
sorted = triangulate = plot = debug = 0 ;
while ((c = getopt(argc, argv, "dpst")) != EOF)
{
switch(c)
{
case 'd':
debug = 1 ;
break ;
case 's':
sorted = 1 ;
break ;
case 't':
triangulate = 1 ;
break ;
case 'p':
plot = 1 ;
break ;
}
}
freeinit(&sfl, sizeof(Site)) ;
if (sorted)
{
scanf("%d %f %f %f %f", &nsites, &xmin, &xmax, &ymin, &ymax) ;
next = readone ;
}
else
{
readsites() ;
next = nextone ;
}
siteidx = 0 ;
geominit() ;
if (plot)
{
plotinit() ;
}
voronoi(next) ;
return (0) ;
}
/*** sort sites on y, then x, coord ***/
int
scomp(const void * vs1, const void * vs2)
{
Point * s1 = (Point *)vs1 ;
Point * s2 = (Point *)vs2 ;
if (s1->y < s2->y)
{
return (-1) ;
}
if (s1->y > s2->y)
{
return (1) ;
}
if (s1->x < s2->x)
{
return (-1) ;
}
if (s1->x > s2->x)
{
return (1) ;
}
return (0) ;
}
/*** return a single in-storage site ***/
Site *
nextone(void)
{
Site * s ;
if (siteidx < nsites)
{
s = &sites[siteidx++];
return (s) ;
}
else
{
return ((Site *)NULL) ;
}
}
/*** read all sites, sort, and compute xmin, xmax, ymin, ymax ***/
void
readsites(void)
{
int i ;
nsites = 0 ;
sites = (Site *) myalloc(4000 * sizeof(Site));
while (scanf("%f %f", &sites[nsites].coord.x,
&sites[nsites].coord.y) !=EOF)
{
sites[nsites].sitenbr = nsites ;
sites[nsites++].refcnt = 0 ;
if (nsites % 4000 == 0)
sites = (Site *)
realloc(sites,(nsites+4000)*sizeof(Site));
}
qsort((void *)sites, nsites, sizeof(Site), scomp) ;
xmin = sites[0].coord.x ;
xmax = sites[0].coord.x ;
for (i = 1 ; i < nsites ; ++i)
{
if(sites[i].coord.x < xmin)
{
xmin = sites[i].coord.x ;
}
if (sites[i].coord.x > xmax)
{
xmax = sites[i].coord.x ;
}
}
ymin = sites[0].coord.y ;
ymax = sites[nsites-1].coord.y ;
}
/*** read one site ***/
Site *
readone(void)
{
Site * s ;
s = (Site *)getfree(&sfl) ;
s->refcnt = 0 ;
s->sitenbr = siteidx++ ;
if (scanf("%f %f", &(s->coord.x), &(s->coord.y)) == EOF)
{
return ((Site *)NULL ) ;
}
return (s) ;
}

@@ -1,57 +0,0 @@
/*** MEMORY.C ***/
#include <stdio.h>
#include <stdlib.h> /* malloc(), exit() */
#include "vdefs.h"
extern int sqrt_nsites, siteidx ;
void
freeinit(Freelist * fl, int size)
{
fl->head = (Freenode *)NULL ;
fl->nodesize = size ;
}
char *
getfree(Freelist * fl)
{
int i ;
Freenode * t ;
if (fl->head == (Freenode *)NULL)
{
t = (Freenode *) myalloc(sqrt_nsites * fl->nodesize) ;
for(i = 0 ; i < sqrt_nsites ; i++)
{
makefree((Freenode *)((char *)t+i*fl->nodesize), fl) ;
}
}
t = fl->head ;
fl->head = (fl->head)->nextfree ;
return ((char *)t) ;
}
void
makefree(Freenode * curr, Freelist * fl)
{
curr->nextfree = fl->head ;
fl->head = curr ;
}
int total_alloc ;
char *
myalloc(unsigned n)
{
char * t ;
if ((t=malloc(n)) == (char *) 0)
{
fprintf(stderr,"Insufficient memory processing site %d (%d bytes in use)\n",
siteidx, total_alloc) ;
return(0) ; // was exit(0) in original source, we aint having that here !!!
}
total_alloc += n ;
return (t) ;
}

@@ -1,244 +0,0 @@
/*** OUTPUT.C ***/
#include <stdio.h>
#include "vdefs.h"
extern int triangulate, plot, debug ;
extern float ymax, ymin, xmax, xmin ;
float pxmin, pxmax, pymin, pymax, cradius;
void
openpl(void)
{
}
#pragma argsused
void
line(float ax, float ay, float bx, float by)
{
}
#pragma argsused
void
circle(float ax, float ay, float radius)
{
}
#pragma argsused
void
range(float pxmin, float pxmax, float pymin, float pymax)
{
}
void
out_bisector(Edge * e)
{
if (triangulate && plot && !debug)
{
line(e->reg[0]->coord.x, e->reg[0]->coord.y,
e->reg[1]->coord.x, e->reg[1]->coord.y) ;
}
if (!triangulate && !plot && !debug)
{
printf("l %f %f %f\n", e->a, e->b, e->c) ;
}
if (debug)
{
printf("line(%d) %gx+%gy=%g, bisecting %d %d\n", e->edgenbr,
e->a, e->b, e->c, e->reg[le]->sitenbr, e->reg[re]->sitenbr) ;
}
}
void
out_ep(Edge * e)
{
if (!triangulate && plot)
{
clip_line(e) ;
}
if (!triangulate && !plot)
{
printf("e %d", e->edgenbr);
printf(" %d ", e->ep[le] != (Site *)NULL ? e->ep[le]->sitenbr : -1) ;
printf("%d\n", e->ep[re] != (Site *)NULL ? e->ep[re]->sitenbr : -1) ;
}
}
void
out_vertex(Site * v)
{
if (!triangulate && !plot && !debug)
{
printf ("v %f %f\n", v->coord.x, v->coord.y) ;
}
if (debug)
{
printf("vertex(%d) at %f %f\n", v->sitenbr, v->coord.x, v->coord.y) ;
}
}
void
out_site(Site * s)
{
if (!triangulate && plot && !debug)
{
circle (s->coord.x, s->coord.y, cradius) ;
}
if (!triangulate && !plot && !debug)
{
printf("s %f %f\n", s->coord.x, s->coord.y) ;
}
if (debug)
{
printf("site (%d) at %f %f\n", s->sitenbr, s->coord.x, s->coord.y) ;
}
}
void
out_triple(Site * s1, Site * s2, Site * s3)
{
if (triangulate && !plot && !debug)
{
printf("%d %d %d\n", s1->sitenbr, s2->sitenbr, s3->sitenbr) ;
}
if (debug)
{
printf("circle through left=%d right=%d bottom=%d\n",
s1->sitenbr, s2->sitenbr, s3->sitenbr) ;
}
}
void
plotinit(void)
{
float dx, dy, d ;
dy = ymax - ymin ;
dx = xmax - xmin ;
d = ( dx > dy ? dx : dy) * 1.1 ;
pxmin = xmin - (d-dx) / 2.0 ;
pxmax = xmax + (d-dx) / 2.0 ;
pymin = ymin - (d-dy) / 2.0 ;
pymax = ymax + (d-dy) / 2.0 ;
cradius = (pxmax - pxmin) / 350.0 ;
openpl() ;
range(pxmin, pymin, pxmax, pymax) ;
}
void
clip_line(Edge * e)
{
Site * s1, * s2 ;
float x1, x2, y1, y2 ;
if (e->a == 1.0 && e->b >= 0.0)
{
s1 = e->ep[1] ;
s2 = e->ep[0] ;
}
else
{
s1 = e->ep[0] ;
s2 = e->ep[1] ;
}
if (e->a == 1.0)
{
y1 = pymin ;
if (s1 != (Site *)NULL && s1->coord.y > pymin)
{
y1 = s1->coord.y ;
}
if (y1 > pymax)
{
return ;
}
x1 = e->c - e->b * y1 ;
y2 = pymax ;
if (s2 != (Site *)NULL && s2->coord.y < pymax)
{
y2 = s2->coord.y ;
}
if (y2 < pymin)
{
return ;
}
x2 = e->c - e->b * y2 ;
if (((x1 > pxmax) && (x2 > pxmax)) || ((x1 < pxmin) && (x2 < pxmin)))
{
return ;
}
if (x1 > pxmax)
{
x1 = pxmax ;
y1 = (e->c - x1) / e->b ;
}
if (x1 < pxmin)
{
x1 = pxmin ;
y1 = (e->c - x1) / e->b ;
}
if (x2 > pxmax)
{
x2 = pxmax ;
y2 = (e->c - x2) / e->b ;
}
if (x2 < pxmin)
{
x2 = pxmin ;
y2 = (e->c - x2) / e->b ;
}
}
else
{
x1 = pxmin ;
if (s1 != (Site *)NULL && s1->coord.x > pxmin)
{
x1 = s1->coord.x ;
}
if (x1 > pxmax)
{
return ;
}
y1 = e->c - e->a * x1 ;
x2 = pxmax ;
if (s2 != (Site *)NULL && s2->coord.x < pxmax)
{
x2 = s2->coord.x ;
}
if (x2 < pxmin)
{
return ;
}
y2 = e->c - e->a * x2 ;
        if (((y1 > pymax) && (y2 > pymax)) || ((y1 < pymin) && (y2 < pymin)))
{
return ;
}
        if (y1 > pymax)
{
y1 = pymax ;
x1 = (e->c - y1) / e->a ;
}
if (y1 < pymin)
{
y1 = pymin ;
x1 = (e->c - y1) / e->a ;
}
if (y2 > pymax)
{
y2 = pymax ;
x2 = (e->c - y2) / e->a ;
}
if (y2 < pymin)
{
y2 = pymin ;
x2 = (e->c - y2) / e->a ;
}
}
line(x1,y1,x2,y2);
}

@@ -1,100 +0,0 @@
0.532095 0.894141
0.189043 0.613426
0.550977 0.415724
0.00397384 0.60576
0.89423 0.666812
0.0730728 0.740658
0.64018 0.926186
0.389914 0.553149
0.046918 0.172275
0.820327 0.578957
0.166575 0.597895
0.587999 0.824301
0.184717 0.0608049
0.264707 0.661072
0.564959 0.824897
0.986991 0.654621
0.214221 0.611877
0.997171 0.807318
0.233578 0.380796
0.209772 0.585171
0.631619 0.418295
0.441601 0.474479
0.246242 0.196578
0.243191 0.428592
0.129101 0.460463
0.808454 0.240363
0.23591 0.362678
0.841259 0.0182264
0.825533 0.867529
0.780973 0.282859
0.492706 0.0717757
0.0641069 0.0241644
0.711451 0.621806
0.532239 0.872561
0.264527 0.947361
0.984485 0.373498
0.890788 0.0900603
0.81489 0.765458
0.656357 0.383494
0.161836 0.878997
0.789622 0.367808
0.00529994 0.694075
0.751558 0.0541492
0.315169 0.989785
0.0675723 0.642346
0.144209 0.130059
0.755242 0.723929
0.0258396 0.306045
0.00905612 0.544864
0.0917369 0.0311395
0.000120247 0.760615
0.599014 0.406906
0.0209242 0.0676926
0.402961 0.743223
0.536965 0.776167
0.791622 0.4288
0.0492686 0.546021
0.321031 0.883358
0.45994 0.0493888
0.306635 0.920045
0.290264 0.480864
0.117081 0.709596
0.663268 0.827229
0.25703 0.908703
0.138396 0.712536
0.37325 0.578061
0.792062 0.598336
0.761925 0.679885
0.498106 0.0823257
0.0791993 0.879007
0.389481 0.161374
0.909555 0.33623
0.78771 0.527877
0.87391 0.282804
0.914291 0.579771
0.126212 0.635836
0.962689 0.412397
0.662097 0.205412
0.514842 0.35217
0.573771 0.571652
0.541641 0.302552
0.880047 0.447681
0.854456 0.455932
0.882323 0.00625933
0.0835167 0.817145
0.868329 0.54442
0.211671 0.598359
0.169315 0.4421
0.116072 0.753312
0.900911 0.0493624
0.889781 0.970528
0.209244 0.783234
0.0556217 0.973298
0.787673 0.0775736
0.327654 0.267293
0.571657 0.956988
0.519674 0.443726
0.0206049 0.472568
0.00635056 0.409455
0.414254 0.229849

@@ -1,859 +0,0 @@
s 0.882323 0.006259
s 0.841259 0.018226
l 1.000000 -0.291425 0.858223
s 0.064107 0.024164
l 1.000000 -0.021883 0.472882
s 0.091737 0.031140
l 1.000000 0.252447 0.084903
s 0.900911 0.049362
l 0.431244 1.000000 0.412316
s 0.459940 0.049389
l 1.000000 -0.102110 0.668290
v 0.419582 -2.435700
l 1.000000 0.063725 0.264367
s 0.751558 0.054149
l 1.000000 -0.400473 0.781916
v 0.565847 -1.003264
e 4 0 1
l 1.000000 -0.081723 0.647837
v 0.324966 -0.950947
e 5 2 0
l 1.000000 0.049563 0.277834
s 0.184717 0.060805
l 1.000000 0.319051 0.152894
s 0.020924 0.067693
l -0.992063 1.000000 0.003750
v 0.869151 0.037499
e 0 1 3
l 1.000000 0.521961 0.888724
s 0.492706 0.071776
l 1.000000 0.683235 0.517715
v 0.300812 -0.463619
e 8 4 2
l 1.000000 -0.041479 0.320043
v 0.613461 -0.420642
e 7 1 5
l 1.000000 0.016324 0.606594
s 0.787673 0.077574
l 1.000000 0.648606 0.812334
v 0.793528 0.028994
e 6 5 6
l -0.902924 1.000000 -0.687501
s 0.498106 0.082326
l 0.511849 1.000000 0.330624
s 0.890788 0.090060
l -0.248734 1.000000 -0.153117
v 0.608770 -0.133270
e 14 7 5
l 1.000000 -0.068095 0.617845
v 0.857338 0.060132
e 11 8 3
l 0.689494 1.000000 0.651261
v 0.067141 0.070358
e 2 9 2
l 1.000000 -0.516194 0.030822
v 0.840710 0.071597
e 16 6 10
e 20 10 8
l 1.000000 0.121095 0.849380
s 0.144209 0.130059
l -0.584918 1.000000 -0.000765
v 0.129055 0.074721
e 9 11 4
l 0.530452 1.000000 0.143178
v 0.618791 0.013896
e 19 7 12
l 1.000000 -0.111171 0.617246
v 0.448758 0.100928
e 12 13 7
e 17 13 12
l 1.000000 0.862990 0.535857
s 0.389481 0.161374
l -0.629181 1.000000 -0.161838
v 0.082218 0.099566
e 21 9 14
e 24 14 11
l 1.000000 0.505873 0.132585
s 0.046918 0.172275
l 0.248549 1.000000 0.128415
v 0.321727 0.040586
e 13 4 15
l 1.000000 0.491146 0.341660
v 0.077349 0.109190
e 28 16 14
l 1.000000 -0.433915 0.029970
v 0.437804 0.113620
e 27 15 17
e 26 17 13
l 1.000000 -0.727717 0.355121
s 0.246242 0.196578
l 0.453146 1.000000 0.226335
v 0.218773 0.127199
e 23 11 18
l 1.000000 0.651936 0.301699
s 0.662097 0.205412
l -0.591428 1.000000 -0.288257
v 0.296482 0.091986
e 33 18 19
e 30 19 15
l 1.000000 -0.245771 0.273874
v 0.626385 0.082205
e 25 12 20
l 1.000000 0.750568 0.688085
v 0.722244 0.138898
e 35 20 21
e 15 21 6
l -0.982303 1.000000 -0.570564
s 0.414254 0.229849
l 0.361782 1.000000 0.341000
s 0.808454 0.240363
l 0.127656 1.000000 0.260846
v 0.830634 0.154811
e 22 22 10
l -0.547788 1.000000 -0.300201
v 0.477547 0.168232
e 32 17 23
l -0.568398 1.000000 -0.103205
v 0.749045 0.165226
e 38 21 24
e 40 24 22
l 1.000000 0.238806 0.788502
s 0.327654 0.267293
l -0.583720 1.000000 0.005031
v 0.321189 0.192515
e 36 19 25
l 1.000000 0.868607 0.488409
v 0.355334 0.212446
e 44 25 26
e 39 26 23
l 1.000000 -0.432379 0.263477
s 0.873910 0.282804
l 1.000000 0.648390 1.010790
s 0.780973 0.282859
l -0.646672 1.000000 -0.252308
v 0.889503 0.187059
e 41 22 27
l -0.087567 1.000000 0.109167
v 0.735216 0.223136
e 43 28 24
l 1.000000 0.651494 0.880588
s 0.541641 0.302552
l 0.197683 1.000000 0.295209
v 0.547788 0.186921
e 37 29 20
l 1.000000 -0.806436 0.397049
v 0.520067 0.192401
e 42 23 30
e 51 30 29
l 1.000000 0.570726 0.629875
s 0.025840 0.306045
l -0.157572 1.000000 0.233428
v 0.827441 0.282775
e 48 28 31
e 47 31 27
l 1.000000 -0.000592 0.827274
s 0.909555 0.336230
l 0.667185 1.000000 0.904467
s 0.514842 0.352170
l -0.540107 1.000000 0.042054
s 0.235910 0.362678
l -0.062204 1.000000 0.264632
v 0.245294 0.279890
e 45 32 25
l -0.961829 1.000000 0.043960
s 0.789622 0.367808
l 0.101814 1.000000 0.405288
v 0.138555 0.250245
e 31 16 33
e 34 33 18
l 1.000000 0.121927 0.169067
v 0.463117 0.292187
e 53 34 30
l 0.822328 1.000000 0.673020
s 0.984485 0.373498
l 1.000000 0.497371 1.123519
v 1.008234 0.231789
l 1.000000 0.820204 1.198348
v 0.137955 0.255166
e 61 36 33
l 1.000000 -0.496669 0.011222
v 1.034507 0.199756
e 64 35 37
e 49 27 37
l 0.330573 1.000000 0.541737
s 0.233578 0.380796
l -0.128712 1.000000 0.341523
v 0.827464 0.321040
e 55 31 38
l -0.991577 1.000000 -0.499454
s 0.656357 0.383494
l -0.032232 1.000000 0.273205
v 0.633847 0.293635
e 52 29 39
l 1.000000 0.705586 0.841031
v 0.688146 0.295385
e 69 39 40
e 50 40 28
l 1.000000 -0.807561 0.449605
v 0.147205 0.273789
e 65 36 41
e 58 41 32
l 1.000000 0.269591 0.221015
v 0.846367 0.339784
e 68 38 42
e 56 42 35
l 1.000000 -0.263297 0.756903
s 0.599014 0.406906
l 0.549792 1.000000 0.668290
v 0.603676 0.336394
e 70 43 39
l 1.000000 -0.408280 0.466333
s 0.006351 0.409455
l -0.188464 1.000000 0.354717
v 0.574582 0.352390
e 57 34 44
e 74 44 43
l 1.000000 0.650288 0.803737
v 0.717875 0.332198
e 71 40 45
e 60 45 38
l 1.000000 -0.117705 0.678773
s 0.962689 0.412397
l -0.560323 1.000000 -0.152575
s 0.550977 0.415724
l 0.568572 1.000000 0.686944
v 0.566469 0.364866
e 77 46 44
l 1.000000 -0.183567 0.499492
s 0.631619 0.418295
l -0.710842 1.000000 -0.056879
v 0.624295 0.386896
e 75 43 47
l 1.000000 0.349302 0.759439
v 0.937997 0.373006
e 63 48 35
l 0.697598 1.000000 1.027351
s 0.243191 0.428592
l 0.201126 1.000000 0.452639
s 0.791622 0.428800
l 0.032791 1.000000 0.424229
s 0.169315 0.442100
l 1.000000 -0.953955 -0.191056
v 0.409040 0.336655
e 46 26 49
e 62 49 34
l 1.000000 0.453432 0.561690
v 0.153601 0.361293
l -0.838496 1.000000 0.232499
s 0.519674 0.443726
l 1.000000 -0.894547 0.150916
v 0.507367 0.398470
e 80 51 46
l 0.052777 1.000000 0.425247
s 0.880047 0.447681
l -0.264762 1.000000 0.155046
v 0.129143 0.340785
e 72 52 41
e 89 52 50
l 1.000000 0.948281 0.452303
v 0.857504 0.382081
e 73 42 53
l 1.000000 0.883306 1.194998
v 0.906422 0.395032
e 92 53 54
e 84 54 48
l 1.000000 -0.426950 0.737763
s 0.854456 0.455932
l 1.000000 -0.322417 0.721581
v 0.848176 0.392641
e 94 55 53
l 0.735713 1.000000 1.016655
v 0.201987 0.412015
e 87 50 56
l 1.000000 -0.182847 0.126651
v 0.842804 0.396593
e 97 57 55
l 1.000000 0.431804 1.014055
v 1.199456 0.145229
e 66 37 58
e 18 8 58
l 0.257837 1.000000 0.454492
s 0.129101 0.460463
l 1.000000 -0.456632 -0.056862
v 0.108630 0.362417
e 93 59 52
l 0.668714 1.000000 0.435059
v 0.093729 0.372381
e 102 60 59
l 1.000000 0.415542 0.248470
v 0.725906 0.400426
e 78 45 61
e 86 61 57
l 1.000000 0.334943 0.860025
s 0.020605 0.472568
l 0.225854 1.000000 0.444055
s 0.441601 0.474479
l -0.598819 1.000000 0.126956
v 0.457785 0.401087
e 91 62 51
l 1.000000 -0.393900 0.299797
s 0.290264 0.480864
l 0.900540 1.000000 0.694927
v 0.396472 0.364372
e 88 63 49
e 106 63 62
l 0.549975 1.000000 0.582421
v 0.336883 0.384883
e 85 56 64
e 67 50 64
l 0.110462 1.000000 0.422096
v 0.070569 0.428117
e 103 65 60
l 1.000000 -0.111571 0.022803
v 0.345321 0.383951
e 110 64 66
l 0.459902 1.000000 0.542765
v 0.350844 0.381411
e 112 66 67
e 59 32 67
l -0.175071 1.000000 0.319989
v -0.258576 0.192683
e 29 68 16
e 54 68 36
l 0.020622 1.000000 0.187351
v 0.361953 0.383356
e 113 67 69
e 109 69 63
l 1.000000 -0.042191 0.345779
s 0.787710 0.527877
l -0.039484 1.000000 0.447159
v 0.710026 0.447837
e 82 47 70
e 104 70 61
l 1.000000 0.065655 0.739428
v 0.807207 0.479031
e 99 71 57
l -0.927737 1.000000 -0.269844
s 0.868329 0.544420
l 0.156778 1.000000 0.635223
s 0.009056 0.544864
l -0.159743 1.000000 0.506347
s 0.049269 0.546021
l 0.390232 1.000000 0.522928
v 0.881814 0.496974
e 96 55 72
l -0.121130 1.000000 0.390160
v 0.030149 0.511163
l 1.000000 0.028772 0.044856
v 0.077761 0.492583
e 121 73 74
e 111 65 74
l -0.933079 1.000000 0.420026
s 0.389914 0.553149
l -0.657010 1.000000 0.240657
v 0.834537 0.504386
e 118 71 75
e 119 75 72
l 1.000000 0.205200 0.938037
v 0.366080 0.481176
e 115 69 76
l 1.000000 0.725389 0.715120
v 0.589040 0.487825
e 81 46 77
e 83 77 47
l 1.000000 0.031881 0.604593
v 0.708234 0.475123
e 117 78 70
e 116 78 71
l 1.000000 0.702039 1.041789
s 0.573771 0.571652
l 0.422877 1.000000 0.738886
v 0.588980 0.489705
e 90 51 79
e 128 79 77
l 1.000000 -0.227174 0.477732
v 0.589004 0.489809
e 131 79 80
l -0.377211 1.000000 0.267630
v 0.217846 0.498748
e 98 56 81
e 108 81 66
l 1.000000 0.320499 0.377694
s 0.373250 0.578061
l -0.668914 1.000000 0.310359
s 0.820327 0.578957
l 0.638546 1.000000 1.066820
v 0.827561 0.538384
e 126 82 75
l 1.000000 -0.719491 0.440198
s 0.914291 0.579771
l 1.000000 0.769134 1.323637
v 0.936319 0.503577
e 122 72 83
l 0.259248 1.000000 0.746315
s 0.209772 0.585171
l 0.282776 1.000000 0.567234
v 0.215419 0.506318
e 133 84 81
l -0.771684 1.000000 0.340083
v 0.951126 0.499738
e 138 83 85
e 95 54 85
l -0.289161 1.000000 0.224710
v 0.179037 0.516606
e 101 59 86
e 139 86 84
l 0.646879 1.000000 0.632422
v 0.329909 0.531040
e 127 87 76
l 0.853792 1.000000 0.812714
s 0.166575 0.597895
l 0.272673 1.000000 0.569490
v 0.168173 0.523634
e 142 88 86
l 1.000000 -0.294558 0.013932
s 0.792062 0.598336
l 1.000000 -0.685617 0.402609
s 0.211671 0.598359
l 0.143994 1.000000 0.622108
v 0.788733 0.563178
e 135 89 82
l 0.061766 1.000000 0.611895
s 0.003974 0.605760
l -0.083458 1.000000 0.574768
v 0.506479 0.524707
e 107 62 90
e 130 90 80
l 1.000000 0.735212 0.892250
v 0.123960 0.535690
e 124 74 91
e 144 91 88
l 1.000000 0.442210 0.360847
s 0.214221 0.611877
l 0.188637 1.000000 0.645288
s 0.189043 0.613426
l -0.733639 1.000000 0.453005
v 0.187995 0.590926
e 145 88 92
l 1.000000 0.691250 0.596472
v 0.192680 0.594363
e 153 92 93
l 1.000000 -0.665858 -0.203082
v 0.028251 0.577126
e 123 94 73
l -0.758211 1.000000 0.555706
v 0.201304 0.607314
e 155 93 95
l 1.000000 -0.061522 0.163940
s 0.711451 0.621806
l -0.811879 1.000000 -0.033727
v 0.678654 0.517258
e 129 96 78
l 0.392274 1.000000 0.783476
v 0.738998 0.566250
e 158 96 97
e 148 97 89
l 1.000000 -0.291152 0.574133
v 0.670378 0.520504
e 132 80 98
e 159 98 96
l 1.000000 0.364280 0.859987
s 0.126212 0.635836
l 1.000000 -0.939994 -0.433456
v 0.106725 0.574665
e 151 99 91
l 0.856687 1.000000 0.666095
s 0.067572 0.642346
l 0.190020 1.000000 0.605285
v 0.867187 0.593460
e 136 82 100
e 137 100 83
l 1.000000 0.008662 0.872328
v 0.052285 0.595349
e 156 94 101
l 1.000000 0.575266 0.394770
v 0.091215 0.587952
e 164 101 102
e 163 102 99
l 1.000000 -0.111017 0.025942
v 0.290765 0.564461
e 140 84 103
e 143 103 87
l 1.000000 -0.043492 0.266215
v 0.482330 0.557553
e 125 76 104
e 150 104 90
l 1.000000 0.100638 0.538441
s 0.986991 0.654621
l 0.971275 1.000000 1.540530
s 0.264707 0.661072
l 1.000000 0.974428 0.859663
v 0.291447 0.580141
e 147 93 105
e 168 103 105
l 1.000000 -0.125623 0.218568
v -0.161545 0.480541
e 105 106 65
e 120 106 73
l 0.019981 1.000000 0.477313
v 0.282872 0.591927
e 152 95 107
l 0.845694 1.000000 0.831150
v 0.160033 0.631376
e 162 99 108
e 154 108 92
l 1.000000 -0.356671 -0.065160
v 0.291962 0.584240
e 174 107 109
e 172 105 109
l 1.000000 -0.764775 -0.154851
v 1.043941 0.526576
e 141 85 110
l 0.100329 1.000000 0.631313
s 0.894230 0.666812
l -0.230478 1.000000 0.414880
v 0.867003 0.614704
e 165 111 100
l 0.841193 1.000000 1.344021
s 0.761925 0.679885
l -0.369557 1.000000 0.351967
v 0.758188 0.632161
e 160 97 112
l 0.869057 1.000000 1.291070
v 0.936674 0.630762
e 178 111 113
e 170 113 110
l 1.000000 -0.131424 0.853777
s 0.005300 0.694075
l 0.015016 1.000000 0.649987
v 0.021036 0.649671
e 166 114 101
l 1.000000 -0.830689 -0.518639
v 0.839768 0.637614
e 146 89 115
e 179 115 111
l 1.000000 0.670229 1.267116
s 0.117081 0.709596
l -0.123793 1.000000 0.657657
s 0.138396 0.712536
l 0.158853 1.000000 0.695203
v 0.132837 0.674101
l 1.000000 0.137931 0.225817
v 0.100332 0.670078
e 167 102 117
e 186 117 116
l 0.736189 1.000000 0.743941
s 0.755242 0.723929
l -0.151735 1.000000 0.586804
v 0.172997 0.667722
e 187 116 118
e 175 108 118
l -0.511018 1.000000 0.579317
v 0.826502 0.657407
e 180 112 119
e 185 119 115
l 1.000000 -0.098810 0.761544
v 0.205257 0.671579
e 157 95 120
e 171 120 107
l 1.000000 0.629705 0.628154
s 0.073073 0.740658
l 1.000000 -0.705823 -0.416734
v 0.071305 0.691447
e 189 121 117
l 0.055949 1.000000 0.695436
v 0.475217 0.628239
e 134 87 122
e 169 122 104
l 1.000000 -0.031962 0.455137
s 0.402961 0.743223
l 0.179890 1.000000 0.730458
v 0.056430 0.692279
e 184 114 123
e 195 123 121
l 1.000000 0.687340 0.532261
v 0.199241 0.681133
e 191 118 124
e 193 124 120
l 1.000000 -0.407438 -0.078279
s 0.116072 0.753312
l -0.023081 1.000000 0.728763
v 0.124900 0.731646
e 188 125 116
l -0.547479 1.000000 0.663266
v 0.354953 0.666606
e 176 109 126
l 1.000000 0.594203 0.751052
v 1.186539 0.512269
e 177 110 127
e 79 48 127
l 0.008914 1.000000 0.522847
v 0.099261 0.731054
e 194 121 128
e 200 128 125
l 1.000000 0.294285 0.314399
s 0.000120 0.760615
l -0.077843 1.000000 0.727134
v 0.689921 0.691489
e 181 129 112
l 0.428807 1.000000 0.987331
s 0.814890 0.765458
l 0.618946 1.000000 1.210653
v 0.809479 0.709630
e 190 129 130
l 1.000000 0.696234 1.303547
v 0.830384 0.696690
e 207 130 131
e 192 119 131
l -0.804290 1.000000 0.028821
v 0.475748 0.644876
e 197 126 132
e 196 122 132
l -0.995565 1.000000 0.171238
v 0.030824 0.729533
e 198 133 123
l 1.000000 -0.273561 -0.168749
s 0.536965 0.776167
l -0.179967 1.000000 0.573961
s 0.209244 0.783234
l 1.000000 0.997883 0.920122
v 0.211183 0.710443
e 199 124 134
l -0.454012 1.000000 0.614563
v 0.493778 0.662825
e 210 132 135
l 1.000000 0.245843 0.656729
v 0.610858 0.683896
e 212 135 136
e 161 136 98
l 1.000000 -0.884661 0.005842
v 0.167016 0.754704
e 201 125 137
e 213 137 134
l 1.000000 0.321148 0.409387
s 0.997171 0.807318
l 0.066668 1.000000 0.797109
s 0.083517 0.817145
l 0.136545 1.000000 0.789592
v 0.085468 0.777922
e 204 138 128
l -0.510007 1.000000 0.734333
v 0.950210 0.733761
e 182 113 139
l 0.732645 1.000000 1.429928
s 0.587999 0.824301
l 1.000000 0.943176 1.317244
s 0.564959 0.824897
l 0.574471 1.000000 1.117044
v 0.575492 0.786441
l 1.000000 -0.025867 0.555149
s 0.663268 0.827229
l -0.890357 1.000000 0.144088
v 0.639225 0.713227
e 206 141 129
l -0.234555 1.000000 0.563294
v 0.636173 0.712511
e 216 136 142
e 226 142 141
l 1.000000 0.404282 0.924229
v 0.629386 0.729299
e 222 140 143
e 227 143 142
l 1.000000 0.038901 0.657756
v 0.045552 0.783372
e 211 133 144
e 219 144 138
l 1.000000 0.677847 0.576558
v 0.303894 0.752535
e 214 134 145
e 202 145 126
l 1.000000 -0.206544 0.148463
v 0.911624 0.762031
e 209 131 146
e 221 146 139
l 1.000000 0.229645 1.086621
s 0.825533 0.867529
l 0.104270 1.000000 0.902017
s 0.532239 0.872561
l -0.686472 1.000000 0.472131
v 0.149132 0.810391
e 220 138 147
e 217 147 137
l 1.000000 -0.269719 -0.069446
v 0.511453 0.823229
e 223 148 140
l -0.049028 1.000000 0.798154
s 0.161836 0.878997
l -0.495055 1.000000 0.739263
s 0.079199 0.879007
l -0.069791 1.000000 0.842398
v 0.149972 0.813508
e 234 147 149
l 1.000000 0.789742 0.792434
s 0.321031 0.883358
l -0.584650 1.000000 0.601649
v 0.310186 0.783000
e 230 145 150
l 1.000000 0.895668 1.011494
v 0.742780 0.805428
e 225 141 151
e 208 151 130
l 1.000000 -0.407402 0.414647
s 0.532095 0.894141
l -0.006673 1.000000 0.879800
v 0.120514 0.850809
e 238 152 149
l 1.000000 -0.000120 0.120412
v 0.901053 0.808064
e 231 153 146
l 1.000000 -0.350802 0.617582
s 0.257030 0.908703
l 0.380859 1.000000 0.934761
v 0.264479 0.834032
e 240 154 150
l 1.000000 -0.396010 -0.065806
v 0.750260 0.823787
e 241 151 155
e 232 155 153
l 1.000000 0.248359 0.954855
v 0.455024 0.820463
e 215 156 135
e 235 156 148
l 0.999537 1.000000 1.275275
v 0.577619 0.868650
e 233 148 157
e 224 140 157
l 1.000000 -0.865495 -0.174194
v 0.223193 0.849756
e 236 149 158
e 245 158 154
l 1.000000 0.312057 0.488366
s 0.306635 0.920045
l -0.392401 1.000000 0.778553
v 0.005291 0.842767
e 229 159 144
e 237 159 152
l 0.667943 1.000000 0.846301
v 0.287126 0.891222
e 246 154 160
l 1.000000 0.228646 0.490901
s 0.640180 0.926186
l -0.233314 1.000000 0.724651
v 0.623904 0.870217
e 228 161 143
l 0.512156 1.000000 1.189753
v 0.590680 0.883741
e 249 157 162
l -0.800458 1.000000 0.410927
s 0.264527 0.947361
l 0.193931 1.000000 0.978605
v 0.593340 0.885871
e 256 162 163
e 255 163 161
l 1.000000 0.296480 0.855983
v 0.279542 0.924393
e 253 164 160
l 1.000000 -0.648712 -0.320124
s 0.571657 0.956988
l 0.629497 1.000000 1.272969
v 0.588385 0.902582
e 258 165 163
l 1.000000 -0.449513 0.182663
v 0.425219 0.850254
e 239 150 166
e 248 166 156
l 1.000000 -0.051120 0.381753
s 0.889781 0.970528
l 0.623773 1.000000 1.454012
s 0.055622 0.973298
l -0.250051 1.000000 0.909296
v 0.925199 0.876897
e 244 153 167
l -0.657986 1.000000 0.268129
v 0.426875 0.882648
e 262 166 168
e 242 168 162
l 1.000000 0.051089 0.471968
s 0.315169 0.989785
l 0.122369 1.000000 0.992960
v 0.732439 0.895540
e 254 161 169
e 247 169 155
l 1.000000 -0.316461 0.449036
v 0.300192 0.956226
e 259 164 170
l 1.000000 0.837724 1.101245
v 0.194772 0.940833
e 250 171 158
e 257 171 164
l 1.000000 0.665725 0.821108
v 0.120525 0.939434
e 243 152 172
l 1.000000 -0.887837 -0.713539
v -0.800463 0.203858
e 114 173 68
e 76 173 60
l -0.042643 1.000000 0.237992
v -0.068622 0.892137
e 252 174 159
e 264 174 172
l 0.260959 1.000000 0.874230
v 0.416510 0.941992
e 267 170 175
e 251 160 175
l -0.055080 1.000000 0.919051
v 0.423822 0.942395
e 274 175 176
e 266 176 168
l 1.000000 -0.440906 0.008315
v 0.163488 0.987825
e 271 172 177
e 270 177 171
l 1.000000 -0.124157 0.040843
v 0.759290 0.980387
e 268 169 178
e 263 178 167
l 1.000000 0.177651 0.933457
v 0.445834 0.992318
e 275 176 179
e 260 179 165
l 1.000000 -0.127870 0.318946
v 0.177717 1.102425
e 276 177 180
e 269 180 170
l 1.000000 0.063522 0.247745
v -0.942148 0.496138
e 173 181 106
e 149 181 94
l -0.012107 1.000000 0.507545
v 0.720786 1.197126
e 261 165 182
e 277 182 178
l 1.000000 0.042562 0.771739
v -0.830797 0.662462
e 183 183 114
e 205 183 133
l -0.024885 1.000000 0.683137
v 4.748839 0.480514
e 218 139 184
e 203 127 184
l 0.029243 1.000000 0.619382
v 0.658662 2.656735
e 278 179 185
e 281 185 182
l 1.000000 -0.033513 0.569627
v -13.741838 0.341169
e 280 186 181
e 282 186 183
l -0.017742 1.000000 0.584978
e 1 -1 0
e 10 -1 9
e 272 -1 173
e 285 -1 186
e 273 -1 174
e 279 -1 180
e 284 185 -1
e 265 167 -1
e 283 184 -1
e 100 58 -1
e 3 3 -1


@@ -1,132 +0,0 @@
#ifndef __VDEFS_H
#define __VDEFS_H
#ifndef NULL
#define NULL 0
#endif
#define DELETED -2
typedef struct tagFreenode
{
    struct tagFreenode * nextfree ;
} Freenode ;

typedef struct tagFreelist
{
    Freenode * head ;
    int nodesize ;
} Freelist ;

typedef struct tagPoint
{
    float x ;
    float y ;
} Point ;

/* structure used both for sites and for vertices */
typedef struct tagSite
{
    Point coord ;
    int sitenbr ;
    int refcnt ;
} Site ;

typedef struct tagEdge
{
    float a, b, c ;
    Site * ep[2] ;
    Site * reg[2] ;
    int edgenbr ;
} Edge ;

#define le 0
#define re 1

typedef struct tagHalfedge
{
    struct tagHalfedge * ELleft ;
    struct tagHalfedge * ELright ;
    Edge * ELedge ;
    int ELrefcnt ;
    char ELpm ;
    Site * vertex ;
    float ystar ;
    struct tagHalfedge * PQnext ;
} Halfedge ;
/* edgelist.c */
void ELinitialize(void) ;
Halfedge * HEcreate(Edge *, int) ;
void ELinsert(Halfedge *, Halfedge *) ;
Halfedge * ELgethash(int) ;
Halfedge * ELleftbnd(Point *) ;
void ELdelete(Halfedge *) ;
Halfedge * ELright(Halfedge *) ;
Halfedge * ELleft(Halfedge *) ;
Site * leftreg(Halfedge *) ;
Site * rightreg(Halfedge *) ;
extern int ELhashsize ;
extern Site * bottomsite ;
extern Freelist hfl ;
extern Halfedge * ELleftend, * ELrightend, **ELhash ;
/* geometry.c */
void geominit(void) ;
Edge * bisect(Site *, Site *) ;
Site * intersect(Halfedge *, Halfedge *) ;
int right_of(Halfedge *, Point *) ;
void endpoint(Edge *, int, Site *) ;
float dist(Site *, Site *) ;
void makevertex(Site *) ;
void deref(Site *) ;
void ref(Site *) ;
extern float deltax, deltay ;
extern int nsites, nedges, sqrt_nsites, nvertices ;
extern Freelist sfl, efl ;
/* heap.c */
void PQinsert(Halfedge *, Site *, float) ;
void PQdelete(Halfedge *) ;
int PQbucket(Halfedge *) ;
int PQempty(void) ;
Point PQ_min(void) ;
Halfedge * PQextractmin(void) ;
void PQinitialize(void) ;
extern int PQmin, PQcount, PQhashsize ;
extern Halfedge * PQhash ;
/* main.c */
extern int sorted, triangulate, plot, debug, nsites, siteidx ;
extern float xmin, xmax, ymin, ymax ;
extern Site * sites ;
extern Freelist sfl ;
/* memory.c */
void freeinit(Freelist *, int) ;
char *getfree(Freelist *) ;
void makefree(Freenode *, Freelist *) ;
char *myalloc(unsigned) ;
/* output.c */
void openpl(void) ;
void line(float, float, float, float) ;
void circle(float, float, float) ;
void range(float, float, float, float) ;
void out_bisector(Edge *) ;
void out_ep(Edge *) ;
void out_vertex(Site *) ;
void out_site(Site *) ;
void out_triple(Site *, Site *, Site *) ;
void plotinit(void) ;
void clip_line(Edge *) ;
/* voronoi.c */
void voronoi(Site *(*)()) ;
#endif


@@ -1,120 +0,0 @@
/*** VORONOI.C ***/
#include "vdefs.h"
extern Site * bottomsite ;
extern Halfedge * ELleftend, * ELrightend ;
/*** implicit parameters: nsites, sqrt_nsites, xmin, xmax, ymin, ymax,
: deltax, deltay (can all be estimates).
: Performance suffers if they are wrong; better to make nsites,
: deltax, and deltay too big than too small. (?)
***/
void
voronoi(Site *(*nextsite)(void))
{
    Site * newsite, * bot, * top, * temp, * p, * v ;
    Point newintstar ;
    int pm ;
    Halfedge * lbnd, * rbnd, * llbnd, * rrbnd, * bisector ;
    Edge * e ;

    PQinitialize() ;
    bottomsite = (*nextsite)() ;
    out_site(bottomsite) ;
    ELinitialize() ;
    newsite = (*nextsite)() ;
    while (1)
    {
        if (!PQempty())
        {
            newintstar = PQ_min() ;
        }
        if (newsite != (Site *)NULL && (PQempty()
            || newsite->coord.y < newintstar.y
            || (newsite->coord.y == newintstar.y
            && newsite->coord.x < newintstar.x)))
        {
            /* new site is smallest */
            out_site(newsite) ;
            lbnd = ELleftbnd(&(newsite->coord)) ;
            rbnd = ELright(lbnd) ;
            bot = rightreg(lbnd) ;
            e = bisect(bot, newsite) ;
            bisector = HEcreate(e, le) ;
            ELinsert(lbnd, bisector) ;
            p = intersect(lbnd, bisector) ;
            if (p != (Site *)NULL)
            {
                PQdelete(lbnd) ;
                PQinsert(lbnd, p, dist(p, newsite)) ;
            }
            lbnd = bisector ;
            bisector = HEcreate(e, re) ;
            ELinsert(lbnd, bisector) ;
            p = intersect(bisector, rbnd) ;
            if (p != (Site *)NULL)
            {
                PQinsert(bisector, p, dist(p, newsite)) ;
            }
            newsite = (*nextsite)() ;
        }
        else if (!PQempty())    /* intersection is smallest */
        {
            lbnd = PQextractmin() ;
            llbnd = ELleft(lbnd) ;
            rbnd = ELright(lbnd) ;
            rrbnd = ELright(rbnd) ;
            bot = leftreg(lbnd) ;
            top = rightreg(rbnd) ;
            out_triple(bot, top, rightreg(lbnd)) ;
            v = lbnd->vertex ;
            makevertex(v) ;
            endpoint(lbnd->ELedge, lbnd->ELpm, v) ;
            endpoint(rbnd->ELedge, rbnd->ELpm, v) ;
            ELdelete(lbnd) ;
            PQdelete(rbnd) ;
            ELdelete(rbnd) ;
            pm = le ;
            if (bot->coord.y > top->coord.y)
            {
                temp = bot ;
                bot = top ;
                top = temp ;
                pm = re ;
            }
            e = bisect(bot, top) ;
            bisector = HEcreate(e, pm) ;
            ELinsert(llbnd, bisector) ;
            endpoint(e, re-pm, v) ;
            deref(v) ;
            p = intersect(llbnd, bisector) ;
            if (p != (Site *) NULL)
            {
                PQdelete(llbnd) ;
                PQinsert(llbnd, p, dist(p, bot)) ;
            }
            p = intersect(bisector, rrbnd) ;
            if (p != (Site *) NULL)
            {
                PQinsert(bisector, p, dist(p, bot)) ;
            }
        }
        else
        {
            break ;
        }
    }
    for (lbnd = ELright(ELleftend) ; lbnd != ELrightend ; lbnd = ELright(lbnd))
    {
        e = lbnd->ELedge ;
        out_ep(e) ;
    }
}


@@ -48,7 +48,7 @@
* - so... hack around this with currentRequest + dispatchRequest TODO: fix
*/
const QString VELOHERO_URL( "https://app.velohero.com" );
const QString VELOHERO_URL( "http://app.velohero.com" );
class VeloHeroParser : public QXmlDefaultHandler
{

File diff suppressed because it is too large


@@ -1,2 +0,0 @@
dummy:
doxygen Doxyfile

Binary file not shown.



@@ -60,18 +60,13 @@ isset(p1)
CONFIGURED PARAMETERS
config(cranklength)
config(cp)
config(aetp)
config(ftp)
config(w')
config(w)
config(pmax)
config(cv)
config(aetv)
config(sex)
config(dob)
config(scv)
config(height)
config(weight)
config(lthr)
config(aethr)
config(maxhr)
config(rhr)
config(units)


@@ -1,18 +0,0 @@
; Sample measures.ini configuration file
; It's looked for in GC root folder, parallel to athlete's directories.
; Nutrition data group
[Nutrition]
; Name: translated identifier, defaults to the group identifier
Name=Nutrition
; Fields: list of identifiers, at least one is mandatory
Fields=Energy,CHO,PRO,FAT
; Names: translated identifiers, if present order and number must match Fields
Names=Calories,CHO,PRO,FAT
; MetricUnits: optional, if present order and number must match Fields
MetricUnits=kcal,g,g,g
; Valid headers for each field to be used in CSV import
Energy=Energy,Calories
CHO=CHO,Carbs
PRO=PRO,Protein
FAT=FAT,Fats


@@ -1,3 +0,0 @@
Datetime,Calories,CHO,PRO,FAT,comment
2018-05-21T12:00:00Z,2900,400,100,100,Test Data
2018-05-22T12:00:00Z,3300,500,100,100,"Here the comment text contain commas itself, so it needs quotes"

Some files were not shown because too many files have changed in this diff