About Vehicle Auto control #793
Comments
Hi @Malathi15, I'm no expert here, but if you're building an end-to-end solution and you want to use ml-agents, maybe you could refer to the code at for your doubts:
Again, I'm no expert here, so please correct me if I'm wrong. Thanks.
@ntutangyun Thank you so much for your reply. Thanks
@Malathi15 could you be more specific about what type of auto control you're looking for? Do you mean that the ego vehicle drives by itself?
@ntutangyun Thanks
@Malathi15 I don't understand your problem here. If the host vehicle could drive by itself, then what would you use reinforcement learning for, and how do you plan to train the network?
@ntutangyun Thanks
@Malathi15 I'm not aware of that type of control change in LGSVL. Maybe the team could help out @EricBoiseLGSVL
Hi @ntutangyun, I am Guru from @Malathi15's team. Answering the question "aren't you using deep networks to output control signals?": yes, we are using RL. Based on observations (sensor data), the vehicle needs to pick autonomous actions (controls) such as accelerate, decelerate, turn right/left, reverse, and brake during RL training, and the reinforcement learning training will determine whether each action was right based on reward points. Can you please let us know if we can perform such things while training? We are not doing imitation learning, and we expect the system to learn automatically through RL. Thanks
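For what it's worth, here is a minimal sketch of how such a discrete action set could be mapped onto the simulator's Python API; the `ACTIONS` table and `apply_action` helper are hypothetical names with made-up control values, not part of LGSVL itself:

```python
import lgsvl

# Hypothetical discrete action set for an RL agent; the specific
# throttle/steering values are assumptions, not LGSVL defaults.
ACTIONS = {
    0: dict(throttle=0.5),                 # accelerate
    1: dict(braking=0.3),                  # decelerate
    2: dict(throttle=0.3, steering=0.5),   # turn right
    3: dict(throttle=0.3, steering=-0.5),  # turn left
    4: dict(throttle=0.3, reverse=True),   # reverse
    5: dict(braking=1.0),                  # hard brake
}

def apply_action(ego, action_id):
    """Translate a discrete RL action into an lgsvl.VehicleControl."""
    c = lgsvl.VehicleControl()
    for field, value in ACTIONS[action_id].items():
        setattr(c, field, value)
    ego.apply_control(c, True)  # sticky=True keeps the control until replaced
```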
Hi @guruvishnuvardan, I haven't tried ML-Agents on LGSVL yet, but it should be possible. You may check out the scripts under
I'm also not an expert, but I will try to answer some of your questions based on my experience. For RL there is a doc; it may be a bit outdated but should be enough to get you started: https://www.lgsvlsimulator.com/docs/openai-gym/
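If it helps, a bare-bones stepping loop with the raw Python API (roughly what the Gym wrapper in that doc builds on) might look like the sketch below; the scene and vehicle names are examples, the policy here is just random controls, and the reward function is something you would define yourself:

```python
import random
import lgsvl

sim = lgsvl.Simulator("127.0.0.1", 8181)
if sim.current_scene != "BorregasAve":        # scene name is an example
    sim.load("BorregasAve")

state = lgsvl.AgentState()
state.transform = sim.get_spawn()[0]
ego = sim.add_agent("Jaguar2015XE (Autoware)", lgsvl.AgentType.EGO, state)

for step in range(200):
    # Placeholder policy: random controls. A real RL agent would map
    # sensor observations to an action here.
    c = lgsvl.VehicleControl()
    c.throttle = random.uniform(0.0, 0.6)
    c.steering = random.uniform(-0.3, 0.3)
    ego.apply_control(c, True)

    sim.run(0.1)          # advance the simulation by 0.1 s per step
    obs = ego.state       # pose and velocity; richer observations come from sensors
    # reward = my_reward_fn(obs)  # hypothetical reward function you define
```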
I think you need an AV stack (e.g., Apollo, Autoware, or your own algorithms) to control the ego car. The simulator just simulates the environment, sensors, cars, etc. There is no AV algorithm in the simulator.
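For completeness, handing the ego over to an AD stack from the Python API is done through the bridge; a minimal sketch, where the scene/vehicle names and the rosbridge port are examples:

```python
import time
import lgsvl

sim = lgsvl.Simulator("127.0.0.1", 8181)
sim.load("BorregasAve")                       # scene name is an example

state = lgsvl.AgentState()
state.transform = sim.get_spawn()[0]
ego = sim.add_agent("Jaguar2015XE (Autoware)", lgsvl.AgentType.EGO, state)

# Connect the vehicle to the AD stack (e.g. Autoware via rosbridge);
# the stack receives sensor data and publishes control commands back.
ego.connect_bridge("127.0.0.1", 9090)         # rosbridge default port
while not ego.bridge_connected:
    time.sleep(1)                             # wait for the stack to come up

sim.run()  # run indefinitely; the AD stack is now driving the ego
```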
@Malathi15 @guruvishnuvardan You could extend the VehicleSMI class to follow the HD map. You would need to look at NPCController.cs and the FollowLanes code that NPCs use to follow lanes. This could be adapted into a controller for the EGO that would use Python API data for control and decisions. As @rongguodong and @dr563105 have already stated, this is possible but will require a good amount of work, because the simulator is designed to be used with an AV stack. Training is important, so please let us know how we can help or if you have questions.
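Until such a C# controller exists, a very crude approximation is possible from Python alone, since sim.map_point_on_lane snaps a position to the nearest HD-map lane pose. The proportional gains below are arbitrary assumptions, and this is nowhere near the quality of the NPC FollowLanes logic:

```python
import lgsvl

def lane_keeping_step(sim, ego, speed_target=8.0, k_steer=0.02):
    """One crude proportional lane-keeping step; all gains are guesses."""
    st = ego.state
    lane = sim.map_point_on_lane(st.transform.position)  # nearest HD-map lane pose

    # Yaw error between the car and the lane direction, in degrees.
    err = lane.rotation.y - st.transform.rotation.y
    if err > 180:
        err -= 360
    elif err < -180:
        err += 360

    c = lgsvl.VehicleControl()
    c.steering = max(-1.0, min(1.0, k_steer * err))
    c.throttle = 0.4 if st.speed < speed_target else 0.0
    ego.apply_control(c, True)
```

Calling this between short sim.run(0.1) steps would give a rough lane follower.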
Thanks @EricBoiseLGSVL, @rongguodong, @dr563105, @ntutangyun. We will work on the suggestions and get back to you, as we are still in the process of understanding the LGSVL Simulator, vehicles, sensors, OpenAI Gym, and RL. Regards
I desperately need an auto-control module for data collection purposes. It is helpful when you are working on ML-based vehicle detection, tracking, and prediction. Using random seeds and giving the ego vehicle NPC-like behaviors such as follow_closest_lane and obeying the traffic rules, I could automate the data-collection process, which is highly labor-intensive without this module. I would appreciate any other comments or suggestions about other possible approaches. Best regards,
@ehsan-ami This issue has multiple ways to automatically control the ego vehicle (PythonAPI, AD stack, editing VehicleSMI and taking code from the NPCControl logic). What issue are you having implementing these solutions?
@EricBoiseLGSVL I am looking for automatic control of the ego vehicle for the purpose of data collection. Using random placement of NPC objects in the scene, randomization of the weather conditions, and the NPCControl logic, you could create various traffic scenarios and collect many data snippets (e.g., 200 snippets of 20 seconds each at an intersection) in the same environment, if only the ego vehicle could behave like an NPC vehicle. I couldn't find a way to do this using the Python API (see the sketch after this comment for what I can already script). On the other hand, an AD stack is not optimized for running lots of short scenarios. The only reasonable option I thought of is editing the simulator's source code, but since I am not familiar with the code structure of LGSVL, I would appreciate a general sense of the task and some hints. Besides, since this feature is needed by any user who wants to collect their own dataset with LGSVL (and I think there must be a reasonable number of them), my suggestion and request is that the NPCControl logic be added for the ego vehicle in a future release. Many thanks,
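In case it helps others, below is a sketch of the episode loop described above, using only documented Python API calls; the scene, vehicle, and NPC names are examples, and the ego itself would still need one of the control approaches discussed earlier in this thread:

```python
import random
import lgsvl

sim = lgsvl.Simulator("127.0.0.1", 8181)
sim.load("BorregasAve")                       # scene name is an example

for episode in range(200):                    # e.g. 200 short snippets
    sim.reset()
    random.seed(episode)                      # reproducible randomization

    # Randomize the weather for each snippet.
    sim.weather = lgsvl.WeatherState(
        rain=random.random(), fog=random.random(), wetness=random.random())

    state = lgsvl.AgentState()
    state.transform = sim.get_spawn()[0]
    ego = sim.add_agent("Jaguar2015XE (Autoware)", lgsvl.AgentType.EGO, state)

    # Scatter a few NPCs on nearby lanes and let them follow traffic rules.
    for _ in range(5):
        pos = lgsvl.Vector(
            state.transform.position.x + random.uniform(-50, 50),
            state.transform.position.y,
            state.transform.position.z + random.uniform(-50, 50))
        npc_state = lgsvl.AgentState()
        npc_state.transform = sim.map_point_on_lane(pos)  # snap to nearest lane
        npc = sim.add_agent(random.choice(["Sedan", "SUV", "Hatchback"]),
                            lgsvl.AgentType.NPC, npc_state)
        npc.follow_closest_lane(True, random.uniform(5, 12))

    sim.run(20)  # one 20-second snippet; sensor capture would go here
```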
The PythonAPI sets the path for the EGO and will not work if you want pathing logic.
Hi,
I am trying to create autonomous vehicle software based on deep reinforcement learning, and I am using the LGSVL Simulator for research purposes.
System specification:
OS: Ubuntu 18.04
Memory: 15.7 GiB
Processor: AMD® Ryzen 7 3800X, 8 cores / 16 threads
I have downloaded the LGSVL Simulator from this GitHub page, https://github.com/lgsvl/simulator, and I have referred to this documentation: https://www.lgsvlsimulator.com/docs/build-instructions/.
After building the LGSVL Simulator I need to run it in auto-control mode, but the simulator runs only in manual control mode. Please check the attached file (the JSON script I used for the vehicle):
Jaguar2015XE (Autoware).txt
Can you please clarify my following doubts:
How can manual controls be used in reinforcement learning training for an autonomous vehicle?
Should the autonomous host vehicle control be written as code?
Do we need to write code to feed in sensor data first, in order to train and make the car drive itself in autonomous driving mode?
Thanks in Advance
Malathi K