
Commit 5bc0b44 (1 parent: 5eaf60d)

Streamlined code & README to be more similar to the canedge-grafana-backend repository. Also updated log files and dashboard templates.

25 files changed: +1,196 −266 lines

.gitignore (+2 −1)

````diff
@@ -6,4 +6,5 @@ build.bat
 *inputs_test.py
 *j1939-engine.dbc
 *j1939-speed.dbc
-*test_new.py
+*test_new.py
+*env/*
````
Three binary files changed (5.3 MB, 7.27 MB and 16.8 MB); binary files not shown.

README.md (+76 −55)
````diff
@@ -1,9 +1,10 @@
 # CANedge InfluxDB Writer - Push CAN Bus Data to InfluxDB
 
-This project lets you DBC decode CAN data from your CANedge - and push the data into an InfluxDB database. From here, the data can be displayed via your own customized, open source Grafana dashboard.
+This project lets you DBC decode CAN data from your [CANedge](https://www.csselectronics.com/pages/can-bus-hardware-products) CAN/LIN data logger - and push the data into an InfluxDB database. From here, the data can be visualized in your own customized, open source Grafana dashboards.
 
 For the full step-by-step guide to setting up your dashboard, see the [CANedge intro](https://canlogger.csselectronics.com/canedge-getting-started/log-file-tools/browser-dashboards).
 
+![CAN Bus Grafana InfluxDB Dashboard](https://canlogger1000.csselectronics.com/img/CAN-Bus-Telematics-Dashboard-InfluxDB-Grafana.png)
 
 ## Backend vs. Writer
 We provide two options for integrating your CANedge data with Grafana dashboards:
````
````diff
@@ -22,89 +23,109 @@ For details incl. 'pros & cons', see our [intro to telematics dashboards](https:
 - fetch data from hardcoded time period - or automate with dynamic periods
 - DBC-decode data and optionally extract specific signals
 - optionally resample data to specific frequency
+- optionally process multi-frame CAN data (ISO TP), incl. UDS, J1939, NMEA 2000
 - write the data to your own InfluxDB time series database
 ```
----
+----
 
 ## Installation
````
````diff
-We recommend to install Python 3.7 for Windows ([32 bit](https://www.python.org/ftp/python/3.7.9/python-3.7.9.exe)/[64 bit](https://www.python.org/ftp/python/3.7.9/python-3.7.9-amd64.exe)) or [Linux](https://www.python.org/downloads/release/python-379/). Once installed, download and unzip the repository, then navigate to the folder with the `requirements.txt` file.
 
-In your explorer path, write `cmd` and hit enter to open your command prompt.
+In this section we detail how to deploy the app on a PC.
 
-Next, enter the below and hit enter to install script dependencies:
-
-``pip install -r requirements.txt``
-
-**Tip:** Watch [this video walkthrough](https://canlogger1000.csselectronics.com/img/dashboard-writer-get-started.mp4) of the above.
+Note: We recommend testing the deployment with our sample data as the first step.
 
+----
 
----
-## Test the script
+### 1: Deploy the integration locally on your PC
 
````
````diff
-### 1: Test script with sample data
-1. Download this repository incl. the J1939 data and demo DBC
-2. In `inputs.py` add your InfluxDB details, then run `python main.py` via the command line
+#### Install dependencies & write sample data to InfluxDB Cloud
 
-*Note: If you use a free InfluxDB Cloud user, the sample data will be removed after a period (as it is >30 days old).*
+- Install Python 3.7 for Windows ([32 bit](https://www.python.org/ftp/python/3.7.9/python-3.7.9.exe)/[64 bit](https://www.python.org/ftp/python/3.7.9/python-3.7.9-amd64.exe)) or [Linux](https://www.python.org/downloads/release/python-379/) (_enable 'Add to PATH'_)
+- Download this project as a zip via the green button and unzip it
+- Open the folder with the `requirements.txt` file
+- Open `inputs.py` with a text editor and add your InfluxDB Cloud details
+- Open your [command prompt](https://www.youtube.com/watch?v=bgSSJQolR0E&t=47s) and enter below
 
-### 2: Modify script with your own details
-1. Local disk: Add your own data next to the scripts as per the SD structure:
-`LOG/<device_ID>/<session>/<split>.MF4`
-2. S3 server: Add your S3 server details in `inputs.py` and set `s3 = True`
-3. In `inputs.py` update the DBC path list and the device list to match yours
-4. Optionally modify the signal filters or resampling frequency
-5. On the 1st run, the script will process data starting from `default_start` (you may want to modify this)
+##### Windows
+```
+python -m venv env & env\Scripts\activate & pip install -r requirements.txt
+python main.py
+```
 
----
+##### Linux
+```
+python -m venv env && source env/bin/activate && pip install -r requirements.txt
+python main.py
+```
 
````
````diff
-## Automation
-There are multiple ways to automate the script execution.
+#### Set up Grafana Cloud
 
-### Use task scheduler
-One approach is via periodic execution, triggered e.g. by Windows Task Scheduler or Linux cron jobs. By default, the script is 'dynamic' meaning that it will only process log files that have not yet been added to the InfluxDB database. The script achieves this by fetching the 'most recent' timestamp (across signals) for each device in InfluxDB. The script will then only fetch log files that contain newer data vs. this timestamp.
+- In `Configuration/Plugins` install `TrackMap`
+- In `Dashboards/Browse` click `Import` and load the `dashboard-template-sample-data.json` from this repo
 
-If no timestamps are found in InfluxDB for a device, `default_start` is used. Same goes if `dynamic = False` is used. If the script is e.g. temporarily unable to connect to InfluxDB, no log files will be listed for processing.
+You should now see the sample data visualized in Grafana.
 
-For details on setting up task scheduler, see the CANedge Intro guide for browser dashboards.
+Note: To activate your virtual environment use `env\Scripts\activate` (Linux: `source env/bin/activate`)
 
````
````diff
-### Set up AWS Lambda function
-Antoher approach is to use event based triggers, e.g. via AWS Lambda functions. We provide a detailed description of setting up AWS Lambda functions in the `aws_lambda_example/` sub folder.
+----
 
----
-## Other practical information
+### 3: Load your own data & DBC files
 
````
````diff
-### Regarding encrypted log files
-If you need to handle encrypted log files, you can provide a passwords dictionary object with similar structure as the `passwords.json` file used in the CANedge MF4 converters. The object can be provided e.g. as below (or via environmental variables):
+#### Load from local disk
+- Replace the sample `LOG/` folder with your own `LOG/` folder
+- Verify that your data is structured as on the CANedge SD card i.e. `[device_id]/[session]/[split].MF4`
+- Add your DBC file(s) to the `dbc_files` folder
+- Update `devices` and `dbc_paths` in `inputs.py` to reflect your added log and DBC files
+- Set `days_offset = None` to ensure your data is written at the correct date
+- Verify that your venv is active and run the script via `python main.py`
 
````
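To make the 'Load from local disk' steps concrete, the relevant `inputs.py` entries might look roughly as below. This is a minimal sketch: the device folder and DBC file name are placeholders, and the exact variable set in the repo's `inputs.py` may differ.

```python
# inputs.py (sketch) - local disk setup; names below are placeholders
devices = ["LOG/958D2219"]             # one folder per device, as on the CANedge SD card
dbc_paths = ["dbc_files/my_cars.dbc"]  # DBC file(s) you added to dbc_files/
days_offset = None                     # write data at its original date
s3 = False                             # read log files from local disk
```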
````diff
-```
-pw = {"default": "password"} # hardcoded
-pw = json.load(open("passwords.json")) # from local JSON file
-```
+#### Load from S3
+- Add your DBC file(s) to the `dbc_files` folder
+- Update `dbc_paths` in `inputs.py` to reflect your added log and DBC files
+- Update `devices` in `inputs.py` to reflect your S3 structure i.e. `["bucket/device_id"]`
+- Set `days_offset = None` to ensure your data is written at the correct date
+- Update the S3 details in `inputs.py` with your S3 server and set `s3 = True`
 
````
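Similarly, a sketch of the S3 variant of `inputs.py` (the credential and endpoint field names are assumptions for illustration; check `inputs.py` for the actual names, and the bucket/device values are placeholders):

```python
# inputs.py (sketch) - S3 setup; field names are assumptions, values are placeholders
devices = ["my-bucket/958D2219"]       # "bucket/device_id" per the bullet above
dbc_paths = ["dbc_files/my_cars.dbc"]
days_offset = None
s3 = True                              # fetch log files from your S3 server
key = "<s3_access_key>"
secret = "<s3_secret_key>"
endpoint = "http://127.0.0.1:9000"     # e.g. a MinIO endpoint; use your server URL
```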
````diff
-### Change timestamps
-If you wish to test the script using old data, you can change the timestamps so that the data is 'rebaselined' to today, minus an offset number of days. This is useful e.g. if you want to use the InfluxDB Cloud Starter, which will delete data that is older than 30 days. To rebaseline your data to start today minus 2 days, simply add `days_offset=2` in the `ProcessData` initialization.
+Note: You may want to modify other variables like adding signal filters, changing the resampling or modifying the default start date.
 
-### Change verbosity
-By default, summary information is printed as part of the processing. You can parse `verbose=False` as an input argument in `list_log_files`, `SetupInflux` and `ProcessData` to avoid this.
+#### Import simplified dashboard template
+- To get started, import the `dashboard-template-simple.json` to visualize your own data
+- After this, you can optionally start customizing your panels as explained in the CANedge Intro
 
````
````diff
-### Delete data from InfluxDB
-If you need to delete data in InfluxDB that you e.g. uploaded as part of a test, you can use the `delete_influx(name)` function from the `SetupInflux` class. Call it by parsing the name of the 'measurement' to delete (i.e. the device ID):
+----
 
-``influx.delete_influx("958D2219")``
+### 4: Move to a production setup
 
-### Multiple channels
-If your log files contain data from two CAN channels, you may need to adjust the script in case you have duplicate signal names across both channels. For example, if you're extracting the signal `EngineSpeed` from both channels.
+#### Automation
+There are multiple ways to automate the script execution.
+
+##### Use task scheduler
+One approach is via periodic execution, triggered e.g. by Windows Task Scheduler or Linux cron jobs. By default, the script is 'dynamic' meaning that it will only process log files that have not yet been added to the InfluxDB database. The script achieves this by fetching the 'most recent' timestamp (across signals) for each device in InfluxDB. The script will then only fetch log files that contain newer data vs. this timestamp.
+
+If no timestamps are found in InfluxDB for a device, `default_start` is used. Same goes if `dynamic = False` is used. If the script is e.g. temporarily unable to connect to InfluxDB, no log files will be listed for processing.
+
+For details on setting up task scheduler, see the CANedge Intro guide for browser dashboards.
 
````
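As an illustration of the periodic approach, a Linux cron entry that runs the writer every 15 minutes could look as below (paths are placeholders; on Windows, an equivalent Task Scheduler task would point at `env\Scripts\python.exe main.py` in the project folder):

```sh
# crontab -e (sketch): run the writer every 15 minutes, append output to a log
*/15 * * * * cd /opt/canedge-influxdb-writer && env/bin/python main.py >> writer.log 2>&1
```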
````diff
-### Advanced processing (custom signals, transport protocol decoding, ...)
-If you need to perform more advanced data processing, you may find useful functions and examples in the api-examples library under `data-processing/`.
+#### Set up an AWS Lambda function
+Another approach is to use event based triggers, e.g. via AWS Lambda functions. We provide a detailed description of setting up AWS Lambda functions in the `aws_lambda_example/` folder. This is recommended if you're uploading data from multiple CANedge2 devices to an S3 server.
 
-In particular, see the guide in that repository for including transport protocol handling for UDS, J1939 or NMEA 2000 fast packets.
+----
+
+## Other information
 
````
````diff
----
+#### Delete data from InfluxDB
+If you need to delete data in InfluxDB that you e.g. uploaded as part of a test, you can use the `delete_influx(name)` function from the `SetupInflux` class. Call it by passing the name of the 'measurement' to delete (i.e. the device ID): `influx.delete_influx("958D2219")`
+
````
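A small usage sketch (assuming `influx` is the `SetupInflux` instance created in `main.py` and `devices` comes from `inputs.py`; entries are `bucket/device_id`, so the measurement name is the last path element):

```python
# Sketch: clean out test data for every configured device before a fresh run
for dev in devices:
    influx.delete_influx(dev.split("/")[-1])  # measurement name = device ID
```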
````diff
+#### Multiple channels
+If your log files contain data from two CAN channels, you may need to adjust the script in case you have duplicate signal names across both channels. For example, if you're extracting the signal `EngineSpeed` from both channels.
 
````
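One way to handle duplicate signal names is to prefix each signal with its channel before writing. This is a hypothetical sketch: the column names assume a decoded dataframe with `BusChannel` and `Signal` columns, which may differ in your setup.

```python
import pandas as pd

# Hypothetical helper: disambiguate duplicate signal names across CAN channels
# by prefixing each signal with its bus channel, e.g. 'CAN1.EngineSpeed'
def prefix_signals_by_channel(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    df["Signal"] = "CAN" + df["BusChannel"].astype(str) + "." + df["Signal"]
    return df
```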
````diff
-### Add InfluxDB tags
+#### Multi-frame data (ISO TP)
+You can easily process multi-frame data by setting the `tp_type` variable to `"j1939"`, `"uds"` or `"nmea"` and adding the relevant DBC file. For example, you can test this for the sample data by adding the DBC `"dbc_files/nissan_uds.dbc"` and setting `tp_type = "uds"`.
+
````
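In `inputs.py` terms, the UDS example above amounts to the following (a sketch based on the note above):

```python
# inputs.py (sketch) - enable UDS multi-frame (ISO TP) decoding for the sample data
tp_type = "uds"                           # alternatives: "j1939" or "nmea"
dbc_paths = ["dbc_files/nissan_uds.dbc"]  # DBC containing the multi-frame signals
```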
````diff
+#### Add InfluxDB tags
 You can add tags to your data when using InfluxDB. This effectively adds additional dimensions to your data that you can e.g. use to color timeseries based on events or to further segment your queries when visualizing the data. The `utils_db.py` contains a basic example via the `add_signal_tags` functions that you can use as an outset for building your own logic.
 
````
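To illustrate the general idea of signal tags (purely illustrative; see `add_signal_tags` in `utils_db.py` for the repo's actual logic and expected structure):

```python
# Hypothetical tag logic: derive an 'engine_state' tag from the EngineSpeed value,
# so dashboards can color or filter the time series by state
def example_signal_tag(signal_name: str, value: float) -> dict:
    if signal_name == "EngineSpeed":
        return {"engine_state": "running" if value > 0 else "off"}
    return {}
```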
````diff
----
-### Regarding InfluxDB and S3 usage costs
+----
+
+#### Regarding InfluxDB and S3 usage costs
 Note that if you use the paid InfluxDB cloud and a paid S3 server, we recommend that you monitor usage during your tests early on to ensure that no unexpected cost developments occur.
````
File renamed without changes.

canedge-influxdb-writer/CSS-Electronics-SAE-J1939-DEMO.dbc (−71): file deleted.

Four binary files changed (contents not shown).

canedge-influxdb-writer/aws_lambda_example/build_layers.py (−80): file deleted.

canedge-influxdb-writer/run.bat (−1): file deleted.
