
COMP9414 24T2
Artificial Intelligence
Assignment 2 - Reinforcement Learning
Due: Week 9, Wednesday, 24 July 2024, 11:55 PM.
1 Problem context
Taxi Navigation with Reinforcement Learning: In this assignment, you are asked to implement the Q-learning and SARSA methods for a taxi navigation problem. To run your experiments and test your code, you should make use of the Gym library [1], an open-source Python library for developing and comparing reinforcement learning algorithms. You can install Gym on your computer simply by using the following command at your command prompt:

pip install gym
In the taxi navigation problem, there are four designated locations in the grid world, indicated by R(ed), G(reen), Y(ellow), and B(lue). When the episode starts, the taxi starts off at a random square and the passenger is at a random location (one of the four specified locations). The taxi drives to the passenger's location, picks up the passenger, drives to the passenger's destination (another one of the four specified locations), and then drops the passenger off. Once the passenger is dropped off, the episode ends. To show the taxi grid-world environment, you can use the following code:
[1] https://www.gymlibrary.dev/environments/toy_text/taxi/
env = gym.make("Taxi-v3", render_mode="ansi").env
state = env.reset()
rendered_env = env.render()
print(rendered_env)
In order to render the environment, there are three modes, known as "human", "rgb_array", and "ansi". The "human" mode visualizes the environment in a way suitable for human viewing, and the output is a graphical window that displays the current state of the environment (see Fig. 1). The "rgb_array" mode provides the environment's state as an RGB image, and the output is a numpy array representing that image. The "ansi" mode provides a text-based representation of the environment's state, and the output is a string that represents the current state of the environment using ASCII characters (see Fig. 2).
Figure 1: "human" mode presentation for the taxi navigation problem in the Gym library.
You are free to choose the presentation mode between "human" and "ansi", but for simplicity, we recommend the "ansi" mode. Based on the given description, there are six discrete deterministic actions, which are presented in Table 1.
Table 1: Six possible actions in the taxi navigation environment.

Action                  Action number
Move South              0
Move North              1
Move East               2
Move West               3
Pickup Passenger        4
Drop off Passenger      5

Figure 2: "ansi" mode presentation for the taxi navigation problem in the Gym library. Gold represents the taxi location, blue is the pickup location, and purple is the drop-off location.

For this assignment, you need to implement the Q-learning and SARSA algorithms for the taxi navigation environment. The main objective is for the agent (taxi) to learn how to navigate the grid world and deliver the passenger in the minimum possible number of steps. To accomplish the learning task, you should empirically determine the hyperparameters, e.g., the learning rate α, the exploration parameters (such as ε or T), and the discount factor γ for your algorithm. Your agent is penalized -1 per step it takes, receives a +20 reward for delivering the passenger, and incurs a -10 penalty for executing the "pickup" and "drop-off" actions illegally. You should try different exploration parameters to find the best balance between exploration and exploitation.
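The two tabular update rules differ only in their bootstrap target: Q-learning bootstraps from the best next action, SARSA from the action actually taken next. A minimal sketch of ε-greedy selection and both updates, using only numpy — the 500 × 6 table shape matches Taxi-v3's state and action counts, but the function names, hyperparameter values, and the toy numbers at the end are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def epsilon_greedy(Q, state, epsilon):
    """With probability epsilon pick a random action, else the greedy one."""
    if rng.random() < epsilon:
        return int(rng.integers(Q.shape[1]))
    return int(np.argmax(Q[state]))

def q_learning_update(Q, s, a, r, s_next, alpha, gamma):
    """Off-policy target: bootstrap from the best action in the next state."""
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])

def sarsa_update(Q, s, a, r, s_next, a_next, alpha, gamma):
    """On-policy target: bootstrap from the action actually chosen next."""
    Q[s, a] += alpha * (r + gamma * Q[s_next, a_next] - Q[s, a])

# Toy check with made-up values: 500 states x 6 actions, as in Taxi-v3.
Q = np.zeros((500, 6))
Q[10] = [0.0, 1.0, 0.0, 0.0, 0.0, 0.0]
q_learning_update(Q, 5, 2, -1.0, 10, alpha=0.1, gamma=0.9)
print(round(Q[5, 2], 6))  # 0.1 * (-1 + 0.9 * 1.0) = -0.01
```

With ε = 0 the selection is purely greedy, which is exactly the mode required later for the test episodes.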
As an outcome, you should plot the accumulated reward per episode and the number of steps taken by the agent in each episode, for at least 1000 learning episodes, for both the Q-learning and SARSA algorithms. Examples of these two plots are shown in Figures 3–6. Please note that the provided plots are just examples; your plots will not look exactly like them, as the learning parameters will differ for your algorithm.
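One possible shape for a training loop that records the two required curves. To keep the sketch self-contained it runs against a tiny hand-rolled chain environment rather than Taxi-v3; the environment, hyperparameter values, and variable names are all assumptions, but the loop structure carries over unchanged:

```python
import numpy as np

rng = np.random.default_rng(42)

class ChainEnv:
    """Tiny stand-in for Taxi-v3 with a similar reset/step interface:
    5 states in a row, start at state 0, goal at state 4,
    actions 0 = left / 1 = right, reward -1 per step and +20 at the goal."""
    n_states, n_actions = 5, 2

    def reset(self):
        self.s = 0
        return self.s

    def step(self, a):
        self.s = max(0, self.s - 1) if a == 0 else min(4, self.s + 1)
        done = self.s == 4
        return self.s, (20 if done else -1), done

env = ChainEnv()
Q = np.zeros((env.n_states, env.n_actions))
alpha, gamma, epsilon = 0.5, 0.95, 0.1      # made-up hyperparameters

episode_rewards, episode_steps = [], []     # the two curves to plot
for _ in range(200):
    s, total, steps, done = env.reset(), 0, 0, False
    while not done and steps < 100:         # cap the episode length
        a = int(rng.integers(2)) if rng.random() < epsilon else int(np.argmax(Q[s]))
        s2, r, done = env.step(a)
        # Q-learning update; zero out the bootstrap at terminal states.
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) * (not done) - Q[s, a])
        s, total, steps = s2, total + r, steps + 1
    episode_rewards.append(total)
    episode_steps.append(steps)

# Plot episode_rewards and episode_steps afterwards, e.g. with matplotlib.
# A greedy rollout after training should reach the goal in the optimal 4 steps:
s, done, greedy_steps = env.reset(), False, 0
while not done and greedy_steps < 100:
    s, _, done = env.step(int(np.argmax(Q[s])))
    greedy_steps += 1
print(greedy_steps)
```

For SARSA, the only structural change is selecting the next action before the update and bootstrapping from it instead of the max.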
After training your algorithm, you should save your Q-values. Based on your saved Q-table, your algorithms will be tested on at least 100 random grid-world scenarios with the same characteristics as the taxi environment, for both the Q-learning and SARSA algorithms, using the greedy action selection method. Your Q-table will therefore not be updated during testing.

Figure 3: Q-learning reward.
Figure 4: Q-learning steps.
Figure 5: SARSA reward.
Figure 6: SARSA steps.
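One way to persist and reload the Q-table for the test episodes is numpy's native format. The "trained" table and file name below are placeholders, and the commented rollout assumes the Gym ≥ 0.26 step API (5-tuple return); adjust for your Gym version:

```python
import os
import tempfile

import numpy as np

# Placeholder "trained" table: 500 states x 6 actions, as in Taxi-v3.
Q = np.random.default_rng(1).normal(size=(500, 6))

# Save once after training; reload before running the test episodes.
path = os.path.join(tempfile.mkdtemp(), "q_table.npy")
np.save(path, Q)
Q_loaded = np.load(path)

def greedy_action(Q, state):
    """Greedy selection for testing: never explore, never update the table."""
    return int(np.argmax(Q[state]))

# Test-episode skeleton (env assumed to follow the Gym >= 0.26 API):
# state, info = env.reset()
# for _ in range(100):                      # cap each test episode at 100 steps
#     state, reward, terminated, truncated, info = env.step(
#         greedy_action(Q_loaded, state))
#     if terminated or truncated:
#         break
print(np.array_equal(Q, Q_loaded))  # True
```

The round trip through np.save/np.load is exact for float arrays, so the tested policy is identical to the trained one.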
Your code should be able to visualize the trained agent for both the Q-learning and SARSA algorithms. This means you should render the "Taxi-v3" environment (you can use the "ansi" mode) and run your trained agent from a random position. You should present the steps your agent takes and how the reward changes from one state to another. An example of the visualized agent is shown in Fig. 7, where only the first six steps of the taxi are displayed.
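To present the steps and the reward changes, you might format each transition as one line and interleave these lines with the "ansi" render of the environment. The helper below is a sketch; the action names follow Table 1, and the demo rollout at the end is invented for illustration:

```python
# Action names in Table 1 order.
ACTION_NAMES = ["Move South", "Move North", "Move East", "Move West",
                "Pickup Passenger", "Drop off Passenger"]

def format_trace(transitions):
    """One line per step: the action taken, its reward, the running total.
    `transitions` is a list of (action, reward) pairs from a greedy rollout."""
    lines, total = [], 0
    for step, (action, reward) in enumerate(transitions, start=1):
        total += reward
        lines.append(f"Step {step}: {ACTION_NAMES[action]:<18} "
                     f"reward={reward:+d}  accumulated={total:+d}")
    return "\n".join(lines)

# Made-up rollout: north twice, pick up, east, drop off.
demo = [(1, -1), (1, -1), (4, -1), (2, -1), (5, 20)]
print(format_trace(demo))
```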
2 Testing and discussing your code
As part of the assignment evaluation, your code will be tested by tutors, along with you, in a discussion carried out in the tutorial session in week 10. The assignment has a total of 25 marks. The discussion is mandatory; therefore, we will not mark any assignment that is not discussed with the tutors.
Before your discussion session, you should prepare the necessary code for this purpose by loading your Q-table and the "Taxi-v3" environment. You should be able to calculate the average number of steps per episode and the average accumulated reward (for a maximum of 100 steps per episode) for the test episodes, using the greedy action selection method.

Figure 7: The first six steps of a trained agent (taxi) based on the Q-learning algorithm.
You are expected to propose and build your algorithms for the taxi navigation task. You will receive marks for each of the subsections shown in Table 2. Beyond what has been mentioned in the previous section, you are welcome to include any other outcome that highlights particular aspects when testing and discussing your code with your tutor.
For both the Q-learning and SARSA algorithms, your tutor will consider the average accumulated reward and the average number of steps taken over the test episodes, with a maximum of 100 steps per episode. For your Q-learning algorithm, the agent should take at most 14 steps per episode on average and obtain an average accumulated reward of at least 7. For your SARSA algorithm, the agent should take at most 15 steps per episode on average and obtain an average accumulated reward of at least 5. Numbers worse than these will result in 0 marks for the specific section.
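Once the test averages are computed, they can be checked directly against these thresholds. The per-episode numbers below are invented for illustration (you will have at least 100 test episodes):

```python
import numpy as np

# Invented results from five greedy test episodes.
test_rewards = [7, 8, 9, 7, 10]
test_steps = [14, 13, 12, 14, 11]

avg_reward = float(np.mean(test_rewards))
avg_steps = float(np.mean(test_steps))

# Q-learning pass criteria: at most 14 steps and at least 7 reward on average.
print(avg_reward, avg_steps, avg_steps <= 14 and avg_reward >= 7)  # 8.2 12.8 True
```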
Finally, you will receive 1 mark for code readability for each task, and your tutor will also award up to 5 marks for each task depending on the level of code understanding, as follows: 5. Outstanding, 4. Great, 3. Fair, 2. Low, 1. Deficient, 0. No answer.
Table 2: Marks for each task.

Results obtained from agent learning
- Accumulated rewards and steps per episode plots for the Q-learning algorithm: 2 marks
- Accumulated rewards and steps per episode plots for the SARSA algorithm: 2 marks

Results obtained from testing the trained agent
- Average accumulated rewards and average steps per episode for the Q-learning algorithm: 2.5 marks
- Average accumulated rewards and average steps per episode for the SARSA algorithm: 2.5 marks
- Visualizing the trained agent for the Q-learning algorithm: 2 marks
- Visualizing the trained agent for the SARSA algorithm: 2 marks

Code understanding and discussion
- Code readability for the Q-learning algorithm: 1 mark
- Code readability for the SARSA algorithm: 1 mark
- Code understanding and discussion for the Q-learning algorithm: 5 marks
- Code understanding and discussion for the SARSA algorithm: 5 marks

Total marks: 25
3 Submitting your assignment
The assignment must be done individually. You must submit your assignment solution via Moodle. The submission consists of a single .zip file including three files: the .ipynb Jupyter code and your saved Q-tables for Q-learning and SARSA (you can choose the format for the Q-tables). Remember that your Q-table files will be loaded during your discussion session to run the test episodes; therefore, you should also provide a script in your Python code at submission to perform these tests. Additionally, your code should include short text descriptions to help the markers better understand it. Please be mindful that providing clean and easy-to-read code is part of your assignment.
Please indicate your full name and your zID at the top of the file as a comment. You can submit as many times as you like before the deadline; later submissions overwrite earlier ones. After submitting your file, it is good practice to take a screenshot of it for future reference.
Late submission penalty: UNSW has a standard late submission penalty of 5% of your mark per day, capped at five days from the assessment deadline, after which students can no longer submit the assignment.
4 Deadline and questions
Deadline: Week 9, Wednesday, 24 July 2024, 11:55 PM. Please use the forum on Moodle to ask questions related to the project; we will prioritise questions asked there. However, you should not share your code, to avoid making it public and enabling possible plagiarism. In that case, use the course email cs9414@cse.unsw.edu.au as an alternative.
Although we try to answer questions as quickly as possible, we might take up to 1 or 2 business days to reply; therefore, last-minute questions might not be answered in time.
For any questions regarding the discussion sessions, please contact your tutor directly. You can find your tutor's email address in Table 3.
5 Plagiarism policy
Your program must be entirely your own work. Plagiarism detection software might be used to compare submissions pairwise (including submissions for any similar projects from previous years), and serious penalties will be applied, particularly in the case of repeat offences.

Do not copy from others. Do not allow anyone to see your code.

Please refer to the UNSW Policy on Academic Honesty and Plagiarism if you require further clarification on this matter.
