Updates README.md and creates README-local.md

This commit is contained in:
Patrick McDonagh
2018-03-14 17:02:14 -05:00
parent 7e324d4ab2
commit bf202dae5c
2 changed files with 227 additions and 107 deletions

README-local.md

@@ -0,0 +1,95 @@
# POCloud Local Report Generators
Developed by Patrick McDonagh @patrickjmcd, Henry Pump
## Setup
Environment variables must be set for the script to run. Add the following lines to /etc/environment:
```
SMTP_EMAIL="<yourSMTPemailAddress>"
SMTP_PASSWORD="<yourSMTPpassword>"
MESHIFY_USERNAME="<yourMeshifyUsername>"
MESHIFY_PASSWORD="<yourMeshifyPassword>"
```
Create a "files" folder in the script's directory.
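Since all four variables are required, the script can fail fast if any is missing. A minimal sketch of that check (the helper name is illustrative, not taken from the actual script):

```python
import os

REQUIRED_VARS = ["SMTP_EMAIL", "SMTP_PASSWORD",
                 "MESHIFY_USERNAME", "MESHIFY_PASSWORD"]

def load_credentials(env=None):
    """Return the required settings, raising early if any is missing
    so the script does not fail halfway through a report run."""
    env = os.environ if env is None else env
    missing = [name for name in REQUIRED_VARS if name not in env]
    if missing:
        raise RuntimeError("missing environment variables: " + ", ".join(missing))
    return {name: env[name] for name in REQUIRED_VARS}
```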
### Install Python Modules
```
pip install xlsxwriter
```
## Configuration Files
The script relies heavily on configuration files based on the Meshify devicetype. To configure a device type, create a file named <devicetype>_channels.json. The file should hold a JSON list.
### Example Configuration File
```
# testdevice_channels.json
[
{
"meshify_name": "yesterday_volume",
"vanity_name": "Yesterday Volume"
},
{
"meshify_name": "volume_flow",
"vanity_name": "Flow Rate"
},
...
]
```
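Loading such a file reduces to a few lines of JSON handling. A sketch, preserving file order so report columns follow the configured order (the function name is illustrative, not necessarily the script's):

```python
import json
import os

def load_channels(devicetype, config_path="."):
    """Read <devicetype>_channels.json and return a list of
    (meshify_name, vanity_name) pairs in file order."""
    path = os.path.join(config_path, devicetype + "_channels.json")
    with open(path) as f:
        channels = json.load(f)
    return [(c["meshify_name"], c["vanity_name"]) for c in channels]
```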
## Recipients File
In order to send emails containing the reports, configure a recipients JSON file named <devicetype>_to.json. The file should hold a JSON object.
### Example Recipients File
```
# testdevice_to.json
{
"Company 1 Name": [
"email1@company.com",
"email2@company.com"
],
"Company 2 Name": [
"email3@company2.com",
"email4@company2.com"
],
...
}
```
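Because the file maps company names to address lists, sending one report email per company is a simple iteration. A sketch (the function name is illustrative):

```python
import json
import os

def iter_recipient_lists(devicetype, config_path="."):
    """Read <devicetype>_to.json and yield one (company, addresses)
    pair per report email to send."""
    path = os.path.join(config_path, devicetype + "_to.json")
    with open(path) as f:
        to_map = json.load(f)
    for company in sorted(to_map):
        yield company, to_map[company]
```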
## Running the script
```
usage: reports_xlsx.py [-h] [-s] [-c CONFIG_PATH] [-o OUTPUT_PATH] deviceType
positional arguments:
deviceType Meshify device type
optional arguments:
-h, --help show this help message and exit
-s, --send Send emails to everyone in the _to.json file
-c CONFIG_PATH, --config-path CONFIG_PATH
The folder path that holds the configuration files
-o OUTPUT_PATH, --output-path OUTPUT_PATH
The folder path that holds the output files
```
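The parser behind that help text can be reconstructed roughly as follows (a sketch derived from the usage output, not the script's exact source):

```python
import argparse

def build_parser():
    parser = argparse.ArgumentParser(prog="reports_xlsx.py")
    parser.add_argument("deviceType", help="Meshify device type")
    parser.add_argument("-s", "--send", action="store_true",
                        help="Send emails to everyone in the _to.json file")
    parser.add_argument("-c", "--config-path", default=".",
                        help="The folder path that holds the configuration files")
    parser.add_argument("-o", "--output-path", default=".",
                        help="The folder path that holds the output files")
    return parser
```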
## Configuring the script to be run via crontab
Open the crontab file with `crontab -e`.
Add the following contents:
```
00 07 * * * /usr/bin/python3 /home/ubuntu/POCloud-Scraper/reports_xlsx.py advvfdipp --send --config-path /home/ubuntu/POCloud-Scraper --output-path /home/ubuntu/POCloud-Scraper/files
01 07 * * * /usr/bin/python3 /home/ubuntu/POCloud-Scraper/reports_xlsx.py ipp --send --config-path /home/ubuntu/POCloud-Scraper --output-path /home/ubuntu/POCloud-Scraper/files
02 07 * * * /usr/bin/python3 /home/ubuntu/POCloud-Scraper/reports_xlsx.py abbflow --send --config-path /home/ubuntu/POCloud-Scraper --output-path /home/ubuntu/POCloud-Scraper/files
```

README.md

@@ -1,128 +1,153 @@
# POCloud Email Report Generator
Developed by Patrick McDonagh @patrickjmcd, Henry Pump
Send daily reports of Meshify data via AWS Lambda functions.
## Using the Generator
Reports will be generated on a schedule by AWS Lambda, a serverless, event-driven computing platform. Each report will contain all devices of a specified type that the user has been granted access to in Meshify. If a user has access to multiple device types and is configured to receive reports for multiple device types, the user will receive one report for each device type. The Lambda function will mark in red any data that is more than 24 hours old in order to denote devices that have not updated. Values reported are the latest values at the time of report generation (12:00 GMT / 07:00 CST by default).
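The 24-hour staleness rule described above boils down to a timestamp comparison at report time. A minimal sketch (in the Lambda function, a True result would presumably be written with a red xlsxwriter cell format):

```python
from datetime import timedelta

STALE_AFTER = timedelta(hours=24)

def is_stale(last_update, report_time):
    """True when a device's latest value is older than 24 hours at
    report time, i.e. the cell should be highlighted in red."""
    return report_time - last_update > STALE_AFTER
```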
If you would like to run the reports locally without the AWS Lambda Function, refer to [README-local.md](README-local.md)
## Setting it up yourself!
### Prerequisites
- Amazon Web Services account
- Sufficient knowledge of S3, Lambda, and SES within Amazon Web Services
- Python 3
### Preparing an S3 Bucket
This section will show you how to configure the S3 Bucket within AWS. It assumes a strong knowledge of AWS platforms.
1. Sign in to your AWS Console and open the S3 dashboard.
2. Create a bucket named "pocloud-email-reports". You may choose to name your bucket differently, but you must update the variable BUCKET_NAME within reports_s3_xlsx.py
3. Open the newly-created bucket and create 3 folders. These names cannot be changed without doing some serious hacking of the reports_s3_xlsx.py file.
- channel_config
- created_reports
- to_files
### Populating Channel Configs
Populating channel config files tells the Lambda function which devices to prepare reports for and which channels to include data from. **Devices will not be recognized unless they have both a Channel Config file and a To file.**
1. Prepare a file named <devicetype>_channels.json where <devicetype> is the Meshify name for the devicetype.
```touch <devicetype>_channels.json```
2. In the text editor of your choice, develop a JSON **list of objects** that contains properties "meshify_name" and "vanity_name".
```JSON
[
{
"meshify_name": "<channel name in meshify>",
"vanity_name": "<vanity name for report header>"
},
{
"meshify_name": "<another channel name in meshify>",
"vanity_name": "<another vanity name for report header>"
}
]
```
3. Upload this file to the "channel_config" folder in the S3 Bucket.
### Populating To Files
Populating To files tells the Lambda function which devices to prepare reports for and whom to send each company's reports to. **Devices will not be recognized unless they have both a Channel Config file and a To file.**
1. Prepare a file named <devicetype>_to.json where <devicetype> is the Meshify name for the devicetype.
```touch <devicetype>_to.json```
2. In the text editor of your choice, develop a JSON **object** that contains properties of the format below. CompanyA and CompanyB should be replaced by the full name of the company as recorded in Meshify.
```JSON
{
"CompanyA": [
"person@email.com",
"place@email.com"
],
"CompanyB": [
"person@email.com",
"thing@email.com"
]
}
```
3. Upload this file to the "to_files" folder in the S3 Bucket.
### Preparing the Lambda function
1. Clone this repository and open it
```Shell
git clone https://github.com/Henry-Pump/POCloud-Email-Reports.git
cd POCloud-Email-Reports
```
2. Set up a Python virtual environment and activate it
```Shell
python3 -m venv env
source env/bin/activate
```
3. Install the necessary Python packages in the virtual environment.
```Shell
pip install requests tzlocal xlsxwriter
```
4. Create a folder for deploying the lambda function
```Shell
mkdir -p deploy
```
5. To build the lambda file automatically, allow execution permissions on the build script and execute it. To build manually, examine the [build_lambda.sh](https://github.com/Henry-Pump/POCloud-Email-Reports/blob/master/build_lambda.sh) file and execute commands at your own peril.
```Shell
chmod +x build_lambda.sh
./build_lambda.sh
```
You should now have a file named lambda.zip in the main directory of the repo. This is the file to upload into your Lambda function.
## Creating the Lambda Function in AWS
This section will show you how to configure the Lambda function within AWS. It assumes a strong knowledge of AWS platforms.
1. Sign in to your AWS Console and open the Lambda dashboard.
2. Click "Create function".
3. Select "Author from scratch" and fill in the info
- Name: give your function a name
- Runtime: select Python 3.6
- Role: either choose an existing role with S3, SES, and Lambda permissions or create one.
- Existing role: select the existing or created role name.
4. Click "Create Function".
5. In the function code section, set the following:
- Code entry type: "Upload a .ZIP file"
- Runtime: Python 3.6
- Handler: reports_s3_xlsx.lambda_handler
- Function package: upload the created lambda.zip
6. In Environment Variables, two variables are needed:
- MESHIFY_PASSWORD: your meshify password
- MESHIFY_USERNAME: your meshify username
7. Drag a CloudWatch Events trigger in the Designer to the trigger section of your function.
8. Configure a new CloudWatch event with the schedule expression:
```cron(0 12 * * ? *)```
This will schedule the event to be triggered at 12:00 PM GMT (7:00 AM CST) every day of the week.
9. Save and test your function.
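For reference, the handler named in step 5 must expose the standard Lambda signature. A minimal skeleton (the real reports_s3_xlsx.lambda_handler presumably does the S3 reads and SES sends inside):

```python
def lambda_handler(event, context):
    """Entry point AWS Lambda invokes on the CloudWatch schedule.
    event holds the scheduled-event payload and context the runtime
    info; the real function would read the S3 config folders, build
    each report, and send it via SES before returning a status."""
    return {"statusCode": 200, "body": "reports generated"}
```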
## Contributors
- [Patrick McDonagh](@patrickjmcd) - Owner