dwa_algorithm #36

Open

ss48 wants to merge 45 commits into motion-planning:develop from ss48:develop
Conversation


@ss48 ss48 commented Aug 31, 2024

No description provided.


ss48 (Author) commented Aug 31, 2024

Adding machine learning (ML) to improve the performance of the robot's routing and scheduling, by using the collected information (such as path length, optimised path, computation time, etc.) to train a model that can predict better paths or tune the optimization parameters. The basic implementation objectives for incorporating ML are shortest path and least computation time.

State: The robot's current position, distance to the goal, distance to obstacles, etc.
Action: Possible movements (e.g., velocity, yaw rate).
Reward: A function that rewards shorter paths, lower computation time, avoiding obstacles, etc.
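The state/action/reward formulation above can be sketched roughly as follows. This is an illustrative Python sketch, not the PR's actual code: the function names, action set, and reward weights are assumptions chosen for clarity.

```python
import math

def make_state(position, goal, nearest_obstacle):
    """State: the robot's current position, distance to the goal,
    and distance to the nearest obstacle (illustrative encoding)."""
    dist_goal = math.dist(position, goal)
    dist_obstacle = math.dist(position, nearest_obstacle)
    return (position[0], position[1], dist_goal, dist_obstacle)

# Action: a discrete set of (velocity, yaw rate) pairs the agent picks from.
ACTIONS = [(v, w) for v in (0.5, 1.0, 1.5) for w in (-0.5, 0.0, 0.5)]

def reward(path_length, computation_time, collided):
    """Reward shorter paths and lower computation time; heavily
    penalize collisions. Weights are illustrative assumptions."""
    r = -path_length - 0.1 * computation_time
    if collided:
        r -= 100.0
    return r
```

A real implementation would likely normalize the state components and tune the reward weights empirically.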

We used Deep Q-Networks (DQN) and created an environment class that encapsulates the state, actions, and rewards. This environment interacts with the RRT and DWA algorithms.
The DQN is responsible for learning the optimal policy based on the robot's state (current position, distance to the goal, distance to obstacles, etc.), actions (possible movements), and rewards (based on path length, computation time, obstacle avoidance, etc.). The reward function combines:
Distance to the goal: Encourage the drone to get closer to the goal.
Obstacle clearance: Penalize the drone for getting too close to obstacles.
Path smoothness: Encourage smoother paths by penalizing sharp turns.
Energy usage: Penalize higher energy consumption to encourage efficient paths.
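A gym-style environment skeleton combining the four reward terms above might look like the following. This is a minimal sketch under stated assumptions: the class name, action set, weights, and thresholds are all illustrative, and the real PR code (which also drives RRT/DWA) may differ substantially.

```python
import math

class PlanningEnv:
    """Hypothetical environment a DQN agent could train against.
    The reward mixes goal distance, obstacle clearance, path
    smoothness, and energy usage, as described in the comment above."""

    # (velocity, yaw rate) pairs; an illustrative discrete action set.
    ACTIONS = [(v, w) for v in (0.5, 1.0) for w in (-0.5, 0.0, 0.5)]

    def __init__(self, goal=(10.0, 10.0), obstacles=((5.0, 5.0),)):
        self.goal = goal
        self.obstacles = obstacles
        self.reset()

    def reset(self):
        self.pos = [0.0, 0.0]
        self.heading = 0.0
        return self._state()

    def _state(self):
        d_goal = math.dist(self.pos, self.goal)
        d_obs = min(math.dist(self.pos, o) for o in self.obstacles)
        return (self.pos[0], self.pos[1], d_goal, d_obs)

    def step(self, action_idx, dt=1.0):
        v, w = self.ACTIONS[action_idx]
        self.heading += w * dt
        self.pos[0] += v * math.cos(self.heading) * dt
        self.pos[1] += v * math.sin(self.heading) * dt
        _, _, d_goal, d_obs = self._state()
        reward = -d_goal                     # encourage approaching the goal
        if d_obs < 0.5:                      # penalize low obstacle clearance
            reward -= 10.0 * (0.5 - d_obs)
        reward -= 0.5 * abs(w)               # penalize sharp turns (smoothness)
        reward -= 0.1 * v ** 2               # penalize energy usage
        done = d_goal < 0.5
        return self._state(), reward, done
```

A DQN agent would then call `reset()` and `step()` in the usual training loop, estimating Q-values over the discrete action set.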
