COMP9417 - Machine Learning Homework 2: Numerical Implementation of Logistic Regression
Introduction
In homework 1, we considered Gradient Descent (and coordinate descent) for minimizing a regularized loss function. In this homework, we consider an alternative method known as Newton's algorithm. We will first run Newton's algorithm on a simple toy problem, and then implement it from scratch on a real data classification problem. We also look at the dual version of logistic regression.
Points Allocation
There are a total of 30 marks.
• Question 1 a): 1 mark
• Question 1 b): 2 marks
• Question 2 a): 3 marks
• Question 2 b): 3 marks
• Question 2 c): 2 marks
• Question 2 d): 4 marks
• Question 2 e): 4 marks
• Question 2 f): 2 marks
• Question 2 g): 4 marks
• Question 2 h): 3 marks
• Question 2 i): 2 marks
What to Submit
• A single PDF file which contains solutions to each question. For each question, provide your solution in the form of text and requested plots. For some questions you will be requested to provide screen shots of code used to generate your answer — only include these when they are explicitly asked for.
• .py file(s) containing all code you used for the project, which should be provided in a separate .zip file. This code must match the code provided in the report.
• You may be deducted points for not following these instructions.
• You may be deducted points for poorly presented/formatted work. Please be neat and make your solutions clear. Start each question on a new page if necessary.

• You cannot submit a Jupyter notebook; this will receive a mark of zero. This does not stop you from developing your code in a notebook and then copying it into a .py file, or from using a tool such as nbconvert or similar.
• We will set up a Moodle forum for questions about this homework. Please read the existing questions before posting new questions. Please do some basic research online before posting questions. Please only post clarification questions. Any questions deemed to be fishing for answers will be ignored and/or deleted.
• Please check Moodle announcements for updates to this spec. It is your responsibility to check for announcements about the spec.
• Please complete your homework on your own, do not discuss your solution with other people in the course. General discussion of the problems is fine, but you must write out your own solution and acknowledge if you discussed any of the problems in your submission (including their name(s) and zID).
• As usual, we monitor all online forums such as Chegg, StackExchange, etc. Posting homework questions on these sites is equivalent to plagiarism and will result in a case of academic misconduct.
When and Where to Submit
• Due date: Week 7, Monday March 25th, 2024 by 5pm. Please note that the forum will not be actively monitored on weekends.
• Late submissions will incur a penalty of 5% per day from the maximum achievable grade. For example, if you achieve a grade of 80/100 but you submitted 3 days late, then your final grade will be 80 − 3 × 5 = 65. Submissions that are more than 5 days late will receive a mark of zero.
• Submission must be done through Moodle, no exceptions.

Question 1. Introduction to Newton’s Method
Note: throughout this question do not use any existing implementations of any of the algorithms discussed unless explicitly asked to in the question. Using existing implementations can result in a grade of zero for the entire question. In homework 1 we studied gradient descent (GD), which is usually referred to as a first order method. Here, we study an alternative algorithm known as Newton’s algorithm, which is generally referred to as a second order method. Roughly speaking, a second order method makes use of both first and second derivatives. Generally, second order methods are much more accurate than first order ones. Given a twice differentiable function g : R → R, Newton’s method generates a sequence {x(k)} iteratively according to the following update rule:
x^{(k+1)} = x^{(k)} - \frac{g'(x^{(k)})}{g''(x^{(k)})}, \qquad k = 0, 1, 2, \ldots \qquad (1)
For example, consider the function g(x) = ½x² − sin(x) with initial guess x(0) = 0. Then g′(x) = x − cos(x) and g′′(x) = 1 + sin(x),
and so we have the following iterations:
x^{(1)} = x^{(0)} - \frac{x^{(0)} - \cos(x^{(0)})}{1 + \sin(x^{(0)})} = 0 - \frac{0 - \cos(0)}{1 + \sin(0)} = 1

x^{(2)} = x^{(1)} - \frac{x^{(1)} - \cos(x^{(1)})}{1 + \sin(x^{(1)})} = 1 - \frac{1 - \cos(1)}{1 + \sin(1)} = 0.750363867840244

x^{(3)} = 0.739112890911362, \ldots
and this continues until we terminate the algorithm (as a quick exercise for your own benefit, code this up and plot the function and each of the iterates; a minimal sketch appears after equation (2) below). We note here that in practice, we often use a different update called the dampened Newton method, defined by:
x^{(k+1)} = x^{(k)} - \alpha \, \frac{g'(x^{(k)})}{g''(x^{(k)})}, \qquad k = 0, 1, 2, \ldots \qquad (2)

Here, as in the case of GD, the step size α has the effect of 'dampening' the update.
Consider now a twice differentiable function f : Rn → R. The Newton steps in this case become:

x^{(k+1)} = x^{(k)} - \big(H(x^{(k)})\big)^{-1} \nabla f(x^{(k)}), \qquad k = 0, 1, 2, \ldots \qquad (3)

where H(x) = ∇²f(x) is the Hessian of f. Heuristically, this formula generalizes equation (1) to functions with vector inputs, since the gradient is the analog of the first derivative and the Hessian is the analog of the second derivative.
(a) Consider the function f : R² → R defined by f(x, y) = 100(y − x²)² + (1 − x)².
Create a 3D plot of the function using mplot3d (see lab0 for an example). Use a range of [−5, 5] for both the x and y axes. Further, compute the gradient and Hessian of f. what to submit: a single plot, the code used to generate the plot, and the gradient and Hessian calculated along with all working. Add a copy of the code to solutions.py
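For reference, a surface plot along these lines can be produced roughly as follows (a sketch only; the grid resolution and colormap are arbitrary choices):

import numpy as np
import matplotlib.pyplot as plt

# f(x, y) = 100 (y - x^2)^2 + (1 - x)^2 evaluated on a grid over [-5, 5]^2
xs = np.linspace(-5, 5, 200)
X, Y = np.meshgrid(xs, xs)
Z = 100 * (Y - X**2)**2 + (1 - X)**2

fig = plt.figure()
ax = fig.add_subplot(projection="3d")  # uses mpl_toolkits.mplot3d under the hood
ax.plot_surface(X, Y, Z, cmap="viridis")
ax.set_xlabel("x"); ax.set_ylabel("y"); ax.set_zlabel("f(x, y)")
plt.show()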
(b) Using NumPy only, implement the (undampened) Newton algorithm to find the minimizer of the function in the previous part, using an initial guess of x(0) = (−1.2, 1)ᵀ. Terminate the algorithm when ‖∇f(x(k))‖₂ ≤ 10⁻⁶. Report the values of x(k) for k = 0, 1, . . . , K, where K is your final iteration. what to submit: your iterations, and a screen shot of your code. Add a copy of the code to solutions.py
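The loop itself mirrors the 1D case. One way to structure it is sketched below, where grad and hessian are placeholders standing in for the expressions you derive in part (a); solving a linear system is used in place of an explicit matrix inverse:

import numpy as np

def newton(grad, hessian, x0, tol=1e-6, max_iter=100):
    # Undampened Newton's method in R^n. grad and hessian are callables
    # returning the gradient vector and Hessian matrix of f at a point;
    # both are placeholders to be filled in with your part (a) derivations.
    x = np.asarray(x0, dtype=float)
    iterates = [x.copy()]
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:  # stopping rule from the spec
            break
        x = x - np.linalg.solve(hessian(x), g)  # solve H d = grad, don't invert H
        iterates.append(x.copy())
    return iterates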
Question 2. Solving Logistic Regression Numerically
Note: throughout this question do not use any existing implementations of any of the algorithms discussed unless explicitly asked to do so in the question. Using existing implementations can result in a grade of zero for the entire question. In this question we will compare gradient descent and Newton's algorithm for solving the logistic regression problem. Recall that in logistic regression, our goal is to minimize the log-loss, also referred to as the cross-entropy loss. Consider an intercept β0 ∈ R, parameter vector β = (β1, . . . , βm)ᵀ ∈ Rm, target yi ∈ {0, 1} and input vector xi = (xi1, xi2, . . . , xip)ᵀ. Consider also the feature map φ : Rp → Rm and corresponding feature vector φi = (φi1, φi2, . . . , φim)ᵀ where φi = φ(xi). Define the (ℓ2-regularized) log-loss function:

L(\beta_0, \beta) = \frac{1}{2}\|\beta\|_2^2 - \frac{\lambda}{n}\sum_{i=1}^{n}\Big[ y_i \ln\big(\sigma(\beta_0 + \beta^T \phi_i)\big) + (1 - y_i)\ln\big(1 - \sigma(\beta_0 + \beta^T \phi_i)\big) \Big],

where σ(z) = (1 + e⁻ᶻ)⁻¹ is the logistic sigmoid, and λ is a hyper-parameter that controls the amount of regularization. Note that λ here is applied to the data-fit term as opposed to the penalty term directly, but all that changes is that larger λ now means more emphasis on data-fitting and less on regularization. Note also that you are provided with an implementation of this loss in helper.py.

(a) Show that the gradient descent update (with step size α) for γ = [β0, βᵀ]ᵀ takes the form

\gamma^{(k)} = \gamma^{(k-1)} - \alpha \times \begin{bmatrix} -\frac{\lambda}{n} \mathbf{1}_n^T \big(y - \sigma(\beta_0^{(k-1)} \mathbf{1}_n + \Phi \beta^{(k-1)})\big) \\[4pt] \beta^{(k-1)} - \frac{\lambda}{n} \Phi^T \big(y - \sigma(\beta_0^{(k-1)} \mathbf{1}_n + \Phi \beta^{(k-1)})\big) \end{bmatrix},

where the sigmoid σ(·) is applied elementwise, 1n is the n-dimensional vector of ones, and

\Phi = \begin{bmatrix} \phi_1^T \\ \phi_2^T \\ \vdots \\ \phi_n^T \end{bmatrix} \in \mathbb{R}^{n \times m}, \qquad y = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix} \in \mathbb{R}^n.

what to submit: your working out.
(b) In what follows, we refer to the version of the problem based on L(β0, β) as the Primal version. Consider the re-parameterization β = \sum_{j=1}^{n} θj φ(xj). Show that the loss can now be written as:

L(\theta_0, \theta) = \frac{1}{2}\theta^T A \theta - \frac{\lambda}{n}\sum_{i=1}^{n}\Big[ y_i \ln\big(\sigma(\theta_0 + \theta^T b_{x_i})\big) + (1 - y_i)\ln\big(1 - \sigma(\theta_0 + \theta^T b_{x_i})\big) \Big],

where θ0 ∈ R, θ = (θ1, . . . , θn)ᵀ ∈ Rn, A ∈ Rn×n and, for i = 1, . . . , n, b_{xi} ∈ Rn. We refer to this version of the problem as the Dual version. Write down exact expressions for A and b_{xi} in terms of k(xi, xj) := ⟨φ(xi), φ(xj)⟩ for i, j = 1, . . . , n. Further, for the dual parameter η = [θ0, θᵀ]ᵀ, show that the gradient descent update is given by:

\eta^{(k)} = \eta^{(k-1)} - \alpha \times \begin{bmatrix} -\frac{\lambda}{n} \mathbf{1}_n^T \big(y - \sigma(\theta_0^{(k-1)} \mathbf{1}_n + A\theta^{(k-1)})\big) \\[4pt] A\theta^{(k-1)} - \frac{\lambda}{n} A \big(y - \sigma(\theta_0^{(k-1)} \mathbf{1}_n + A\theta^{(k-1)})\big) \end{bmatrix}.

If m ≫ n, what is the advantage of the dual representation relative to the primal one which just makes use of the feature maps φ directly? what to submit: your working along with some commentary.
(c) We will now compare the performance of (primal/dual) GD and the Newton algorithm on a real dataset using the updates derived in the previous parts. To do this, we will work with the songs.csv dataset. The data contains information about various songs, and also contains a class variable outlining the genre of the song. If you are interested, you can read more about the data here, though a deep understanding of each of the features will not be crucial for the purposes of this assessment. Load in the data and perform the following preprocessing (a sketch of these steps is given after the list):
(I) Remove the following features: "Artist Name", "Track Name", "key", "mode", "time signature", "instrumentalness"
(II) The current dataset has 10 classes, but logistic regression in the form we have described it here only works for binary classification. We will restrict the data to classes 5 (hiphop) and 9 (pop). After removing the other classes, re-code the variables so that the target variable is y = 1 for hiphop and y = 0 for pop.
(III) Remove any remaining rows that have missing values for any of the features. Your remaining dataset should have a total of 3886 rows.
(IV) Use the sklearn.model_selection.train_test_split function to split your data into X_train, X_test, y_train and y_test. Use a test size of 0.3 and a random state of 23 for reproducibility.
(V) Fit the sklearn.preprocessing.MinMaxScaler to the resulting training data, and then use this object to scale both your train and test datasets so that the range of the data is in (0, 0.1).
(VI) Print out the first and last row of X_train, X_test, y_train and y_test (but only the first 3 columns of X_train and X_test).
What to submit: the print out of the rows requested in (VI). A copy of your code in solutions.py
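As promised above, here is a sketch of steps (I)–(VI). It assumes the data is loaded with pandas and that the genre column is named Class; the file path and all column-name strings are assumptions to be checked against the actual dataset:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

df = pd.read_csv("songs.csv")

# (I) remove the listed features (names assumed to match the csv headers)
df = df.drop(columns=["Artist Name", "Track Name", "key", "mode",
                      "time signature", "instrumentalness"])

# (II) keep classes 5 (hiphop) and 9 (pop); re-code so hiphop -> 1, pop -> 0
df = df[df["Class"].isin([5, 9])]
df["y"] = (df["Class"] == 5).astype(int)
df = df.drop(columns=["Class"])

# (III) drop remaining rows with missing values (should leave 3886 rows)
df = df.dropna()

# (IV) 70/30 split with the prescribed random state
X, y = df.drop(columns=["y"]), df["y"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=23)

# (V) fit the scaler on the training data only, then transform both sets
scaler = MinMaxScaler(feature_range=(0, 0.1))
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# (VI) print the requested rows (first 3 columns of the X arrays)
print(X_train[0, :3], X_train[-1, :3], X_test[0, :3], X_test[-1, :3])
print(y_train.iloc[0], y_train.iloc[-1], y_test.iloc[0], y_test.iloc[-1])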
(d) For the primal problem, we will use the feature map that generates all polynomial features up to and including order 3, that is:

\phi(x) = [1, x_1, \ldots, x_p, x_1^3, \ldots, x_p^3, x_1 x_2 x_3, \ldots, x_{p-2} x_{p-1} x_p].

In Python, we can generate such features using sklearn.preprocessing.PolynomialFeatures. For example, consider the following code snippet:

import numpy as np
from sklearn.preprocessing import PolynomialFeatures

poly = PolynomialFeatures(3)
X = np.arange(6).reshape(3, 2)
poly.fit_transform(X)
Transform the data appropriately, then run gradient descent with α = 0.4 on the training dataset for 50 epochs and λ = 0.5. In your implementation, initialize β0(0) = 0 and β(0) = 0p, where 0p is the p-dimensional vector of zeroes. Report your final train and test losses, as well as a plot of the training loss at each iteration.[1] what to submit: one plot of the train losses. Report your train and test losses, and a screen shot of any code used in this section, as well as a copy of your code in solutions.py.

[1] If you need a sanity check here, the best thing to do is use sklearn to fit logistic regression models. This should give you an idea of what kind of loss your implementation should be achieving (if your implementation does as well or better, then you are on the right track).
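As a rough guide, the GD loop for this part might be organised as below. This is a sketch only: the log_loss function here simply mirrors the loss defined at the start of Question 2 and is a stand-in for the implementation provided in helper.py, whose exact signature may differ.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_loss(beta0, beta, Phi, y, lam):
    # mirrors L(beta0, beta) as defined at the start of Question 2
    p = sigmoid(beta0 + Phi @ beta)
    return 0.5 * beta @ beta - (lam / len(y)) * np.sum(
        y * np.log(p) + (1 - y) * np.log(1 - p))

def primal_gd(Phi, y, alpha=0.4, lam=0.5, epochs=50):
    n, m = Phi.shape
    beta0, beta = 0.0, np.zeros(m)  # zero initialization, per the spec
    losses = [log_loss(beta0, beta, Phi, y, lam)]
    for _ in range(epochs):
        r = y - sigmoid(beta0 + Phi @ beta)  # shared by both gradient blocks
        beta0 -= alpha * (-(lam / n) * r.sum())          # update from part (a)
        beta -= alpha * (beta - (lam / n) * (Phi.T @ r))
        losses.append(log_loss(beta0, beta, Phi, y, lam))
    return beta0, beta, losses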
 
(e) For the primal problem, run the dampened Newton algorithm on the training dataset for 50 epochs and λ = 0.5. Use the same initialization for β0, β as in the previous question. Report your final train and test losses, as well as plots of your train loss for both the GD and Newton algorithms over all iterations (use labels/legends to make your plot easy to read). In your implementation, you may use the fact that the Hessian for the primal problem is given by:

H(\beta_0, \beta) = \begin{bmatrix} \frac{\lambda}{n} \mathbf{1}_n^T D \mathbf{1}_n & \frac{\lambda}{n} \mathbf{1}_n^T D \Phi \\[4pt] \frac{\lambda}{n} \Phi^T D \mathbf{1}_n & I_m + \frac{\lambda}{n} \Phi^T D \Phi \end{bmatrix},

where D is the n × n diagonal matrix with i-th diagonal element σ(di)(1 − σ(di)) and di = β0 + φiᵀβ. what to submit: one plot of the train losses. Report your train and test losses, and a screen shot of any code used in this section, as well as a copy of your code in solutions.py.
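A minimal sketch of one dampened Newton step for γ = [β0, β] using this Hessian is given below. It reuses the sigmoid and gradient expressions from the GD sketch in part (d), assembles H as a dense block matrix, and solves a linear system rather than forming an explicit inverse:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def newton_step(beta0, beta, Phi, y, alpha, lam):
    # One dampened Newton update for gamma = [beta0, beta].
    n, m = Phi.shape
    d = sigmoid(beta0 + Phi @ beta)
    w = d * (1 - d)  # diagonal of D
    r = y - d
    # gradient of L(beta0, beta), as derived in part (a)
    grad = np.concatenate(([-(lam / n) * r.sum()],
                           beta - (lam / n) * (Phi.T @ r)))
    # block Hessian given above; D @ Phi is computed as w[:, None] * Phi
    H = np.zeros((m + 1, m + 1))
    H[0, 0] = (lam / n) * w.sum()
    H[0, 1:] = (lam / n) * (w @ Phi)
    H[1:, 0] = H[0, 1:]
    H[1:, 1:] = np.eye(m) + (lam / n) * (Phi.T @ (w[:, None] * Phi))
    gamma = np.concatenate(([beta0], beta)) - alpha * np.linalg.solve(H, grad)
    return gamma[0], gamma[1:]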
(f) For the feature map used in the previous two questions, what is the corresponding kernel k(x, y) that can be used to give the corresponding dual problem? what to submit: the chosen kernel.
(g) Implement gradient descent for the dual problem using the kernel found in the previous part. Use the same parameter values as before (although now θ0(0) = 0 and θ(0) = 0n). Report your final training loss, as well as a plot of your train loss for GD over all iterations. what to submit: a plot of the train losses and your final train loss, a screen shot of any code used in this section, as well as a copy of your code in solutions.py.
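Structurally, the dual loop is the primal one with Φβ replaced by Aθ, and the Gram matrix A can be precomputed once. In the sketch below, kernel is a placeholder for whatever k(x, y) you derived in part (f), and loss tracking (omitted here) would mirror the primal sketch:

import numpy as np

def gram_matrix(X, kernel):
    # A[i, j] = k(x_i, x_j) for the rows of X; kernel is your part (f) answer
    n = X.shape[0]
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            A[i, j] = A[j, i] = kernel(X[i], X[j])  # A is symmetric
    return A

def dual_gd(A, y, alpha=0.4, lam=0.5, epochs=50):
    # gradient descent on the dual objective, using the update from part (b)
    n = A.shape[0]
    theta0, theta = 0.0, np.zeros(n)  # zero initialization, per the spec
    for _ in range(epochs):
        r = y - 1.0 / (1.0 + np.exp(-(theta0 + A @ theta)))
        theta0 -= alpha * (-(lam / n) * r.sum())
        theta -= alpha * (A @ theta - (lam / n) * (A @ r))
    return theta0, theta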
(h) Explain how to compute the test loss for the GD solution to the dual problem in the previous part. Implement this approach and report the test loss. what to submit: some commentary and a screen shot of your code, and a copy of your code in solutions.py.
(i) In general, it turns out that Newton's method is much better than GD: convergence of the Newton algorithm is quadratic, whereas convergence of GD is linear (much slower than quadratic). Given this, why do you think gradient descent and its variants (e.g. SGD) are much more popular for solving machine learning problems? what to submit: some commentary