EXICodec encoding time inconsistent #309
Hi, we have experienced a drop in performance with embedded devices in comparison to general-purpose CPUs. The Java engine is quite heavy for embedded applications and is not well suited to running the codec.
Then may I ask why the supported app protocol message encodes in such a swift time, yet the other messages take such an extremely long time? It just doesn't make much sense to me why that message is encoded so fast while the others aren't. Thank you.
One reason could be the schema used to generate the EXI. The schema used for supportedAppProtocol is V2G_CI_AppProtocol.xsd, which is the simplest of the lot and is a single file, so there is a lot less grammar to traverse. Also, supportedAppProtocolRes is, afaik, the shortest EXI stream of all EXI messages in 15118-2.
Can I ask if you tried disabling the JSON and EXI message logging?
I will try, thank you! I will post the results. Cheers. Apologies for the late response.
Alright, disabling both JSON and EXI logging has improved performance quite a bit. I have additionally raised the OS scheduling priority (niceness) for the EVCC and SECC and enabled TCP_NODELAY for the SECC socket. It still times out occasionally at the beginning, but performance has greatly improved. One thing I have also noticed: the more cycles that run sequentially, the better the codec's performance appears to get. For example, I can set the SDP retry cycles of the EVCC to around 50, and the longer it runs, the better the codec performs, with the occasional outlier. Perhaps I will also look into improving the performance of the codec on the system myself by looking for optimizations, although I doubt I will find many. If you have any other ideas for improving the performance, I would love to hear them!
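For reference, here is a minimal sketch of the two OS/socket tweaks mentioned above, assuming a plain Python socket for the SECC; the function names are illustrative and not part of the iso15118 library:

```python
import os
import socket

def tune_secc_socket(sock: socket.socket) -> None:
    # Disable Nagle's algorithm so small V2G messages are flushed immediately
    # instead of waiting to be coalesced with later writes.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

def raise_process_priority(increment: int = -10) -> None:
    # os.nice() adds the increment to the current nice value; a negative
    # increment (higher priority) requires sufficient privileges.
    try:
        os.nice(increment)
    except PermissionError:
        pass  # keep the default priority if we are not allowed to change it
```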
I do not believe I know quite enough about the library, but I feel it is still worth asking. Could it be that the performance of supported app protocol is fast because it doesn't have to import very big XML files? The path for a regular message goes something like this.
Is this import chain walked every time a message needs to be constructed, or is it built once and cached? Reconstructing a message by hand (e.g. SessionSetupReq) results in a decently small file. I may be wrong about this, or maybe it is optimized away later, but I am just trying to understand whether there is a better way to do this, such as defining the message attributes in individual files and loading them only when required. Again, please correct me if I am wrong.
The difference in message length shouldn't matter that much.
Yes, the grammar for AppProtocol is way smaller compared to the others. The grammar is loaded only once, so I would expect subsequent calls to be quicker.
Correct - the length of the input does have an impact, but the size of the grammar has the bigger impact. As a quick test, could you please try the following: construct a standalone SessionSetupReq (or another message that requires V2GCI_Msg_Def) with random values before running the actual session. I would like to see if this helps in your case.
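As a rough illustration of that warm-up idea, something along these lines could be run before starting the real session. The module paths, message class, and encoder call below are hypothetical placeholders, not the actual iso15118 API:

```python
import secrets
import time

# Hypothetical imports: substitute the real message class and EXI encoder
# entry point from the library being used.
from my_stack.messages import SessionSetupReq   # placeholder
from my_stack.exi_codec import exi_encode       # placeholder

def warm_up_codec() -> float:
    """Encode one throwaway SessionSetupReq so the large V2GCI_Msg_Def
    grammar is parsed (and the JVM warmed up) before the first real request."""
    dummy = SessionSetupReq(evcc_id=secrets.token_hex(6).upper())  # random EVCCID
    start = time.perf_counter()
    exi_encode(dummy)  # result is discarded; only the side effect matters
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"warm-up encode took {warm_up_codec():.3f} s")
```

If the first warm-up call is dramatically slower than a second one, the cost is grammar loading and JVM warm-up rather than the content of the message itself.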
I stripped away all message info related to other messages except for SessionSetup in the message definitions, yet the simulator still somehow runs the simulated session in its entirety, which is impressive. I speculate that the .jar loads these files statically from internal copies, so any changes I make have no effect. Could this be true?
Using the ISO15118 library I have encountered an issue I cannot quite seem to resolve.
When I run the software stack on an Ubuntu VM, the encoding and decoding timings are very fast and smooth. However, now running it on two embedded Linux devices, the performance has plummeted, and I cannot quite figure out why.
I will provide some details here:
Output of the program: I have added several "time to encode" stamps; these are in seconds. I am simply running both simulators (EVCC and SECC); in this case the output is from the SECC.
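(For context, the timing stamps were taken along these lines; `encode_fn` is a stand-in for whatever call wraps the EXICodec in the stack, not an actual library function.)

```python
import time

def timed_encode(encode_fn, message):
    # Wrap an arbitrary encode call and print the elapsed wall-clock time
    # in seconds, matching the "time to encode" stamps mentioned above.
    start = time.perf_counter()
    exi_stream = encode_fn(message)
    print(f"time to encode: {time.perf_counter() - start:.3f} s")
    return exi_stream
```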
The system it is running on should have enough processing power to handle this.
Small note: I have modified the launch_gateway function to supply py4j with a fixed path to a JDK stored on an SD card, because of storage space restrictions.
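A sketch of that kind of modification, assuming py4j's launch_gateway is used to start the JVM hosting EXICodec.jar; the paths and heap option below are examples, not the values used on the device:

```python
from py4j.java_gateway import GatewayParameters, JavaGateway, launch_gateway

# Start the JVM from a JDK relocated to the SD card instead of relying on
# a "java" binary on the PATH.
port = launch_gateway(
    classpath="/path/to/EXICodec.jar",     # example location of the codec jar
    java_path="/mnt/sdcard/jdk/bin/java",  # example JDK path on the SD card
    javaopts=["-Xmx64m"],                  # optional: cap the heap on a small device
)
gateway = JavaGateway(
    gateway_parameters=GatewayParameters(port=port, auto_convert=True)
)
```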
Has anyone experienced similar issues? If any more info is required, I will supply it.