Electromobile_In_Elevator_Detection #861

Open · wants to merge 13 commits into base: master
Empty file modified .clang_format.hook
100644 → 100755
Empty file.
Empty file modified .gitignore
100644 → 100755
Empty file.
Empty file modified .gitkeep
100644 → 100755
Empty file.
Empty file modified .pre-commit-config.yaml
100644 → 100755
Empty file.
Empty file modified .readthedocs.yaml
100644 → 100755
Empty file.
2 changes: 1 addition & 1 deletion Dive-into-DL-paddlepaddle/FAQ.md
100644 → 100755
@@ -1,3 +1,3 @@
# FAQ

Hello everyone! This page records the problems that developers encountered while using the PaddlePaddle core framework to develop this book.
2 changes: 1 addition & 1 deletion Dive-into-DL-paddlepaddle/README.md
100644 → 100755
@@ -29,7 +29,7 @@


## Original book
Chinese version: [Dive into Deep Learning](https://zh.d2l.ai/) | [GitHub repo](https://github.com/d2l-ai/d2l-zh)
English version: [Dive into Deep Learning](https://d2l.ai/) | [GitHub repo](https://github.com/d2l-ai/d2l-en)


1 change: 0 additions & 1 deletion ...nto-DL-paddlepaddle/docs/10_attention-mechanisms/attention-scoring-functions.md
100644 → 100755
@@ -269,4 +269,3 @@ d2l.show_heatmaps(attention.attention_weights.reshape((1, 1, 2, 10)),


[Discussions](https://discuss.d2l.ai/t/5752)

1 change: 0 additions & 1 deletion Dive-into-DL-paddlepaddle/docs/10_attention-mechanisms/bahdanau-attention.md
100644 → 100755
@@ -224,4 +224,3 @@ d2l.show_heatmaps(


[Discussions](https://discuss.d2l.ai/t/5754)

1 change: 0 additions & 1 deletion Dive-into-DL-paddlepaddle/docs/10_attention-mechanisms/multihead-attention.md
100644 → 100755
@@ -179,4 +179,3 @@ attention(X, Y, Y, valid_lens).shape


[Discussions](https://discuss.d2l.ai/t/5758)

1 change: 0 additions & 1 deletion Dive-into-DL-paddlepaddle/docs/10_attention-mechanisms/nadaraya-waston.md
100644 → 100755
@@ -321,4 +321,3 @@ d2l.show_heatmaps(net.attention_weight.unsqueeze(0).unsqueeze(0),


[Discussions](https://discuss.d2l.ai/t/5760)

3 changes: 1 addition & 2 deletions ...lepaddle/docs/10_attention-mechanisms/self-attention-and-positional-encoding.md
100644 → 100755
@@ -193,7 +193,7 @@ $$\begin{aligned}
\begin{bmatrix} p_{i, 2j} \\ p_{i, 2j+1} \\ \end{bmatrix}\\
=&\begin{bmatrix} \cos(\delta \omega_j) \sin(i \omega_j) + \sin(\delta \omega_j) \cos(i \omega_j) \\ -\sin(\delta \omega_j) \sin(i \omega_j) + \cos(\delta \omega_j) \cos(i \omega_j) \\ \end{bmatrix}\\
=&\begin{bmatrix} \sin\left((i+\delta) \omega_j\right) \\ \cos\left((i+\delta) \omega_j\right) \\ \end{bmatrix}\\
=&\begin{bmatrix} p_{i+\delta, 2j} \\ p_{i+\delta, 2j+1} \\ \end{bmatrix},
\end{aligned}$$

@@ -212,4 +212,3 @@ the $2\times 2$ projection matrix does not depend on any position index $i$.


[Discussions](https://discuss.d2l.ai/t/5762)
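
Since the $2\times 2$ rotation above depends only on the offset $\delta$ and the frequency $\omega_j$, the encoding at position $i+\delta$ is a fixed linear function of the encoding at position $i$. A minimal NumPy sketch that checks this identity numerically (the helper `sinusoidal_pe` is hypothetical, not code from this PR):

```python
import numpy as np

def sinusoidal_pe(num_pos, d):
    # p[i, 2j] = sin(i * w_j), p[i, 2j+1] = cos(i * w_j), with w_j = 1/10000^(2j/d)
    pe = np.zeros((num_pos, d))
    w = 1.0 / np.power(10000, np.arange(0, d, 2) / d)
    pos = np.arange(num_pos)[:, None]
    pe[:, 0::2] = np.sin(pos * w)
    pe[:, 1::2] = np.cos(pos * w)
    return pe

pe = sinusoidal_pe(100, 32)
i, delta, j = 5, 7, 3
w_j = 1.0 / 10000 ** (2 * j / 32)
# The projection matrix is determined by delta and w_j alone, never by i.
rot = np.array([[np.cos(delta * w_j), np.sin(delta * w_j)],
                [-np.sin(delta * w_j), np.cos(delta * w_j)]])
assert np.allclose(rot @ pe[i, 2 * j:2 * j + 2], pe[i + delta, 2 * j:2 * j + 2])
```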

1 change: 0 additions & 1 deletion Dive-into-DL-paddlepaddle/docs/10_attention-mechanisms/transformer.md
100644 → 100755
@@ -422,4 +422,3 @@ d2l.show_heatmaps(


[Discussions](https://discuss.d2l.ai/t/5756)

5 changes: 2 additions & 3 deletions Dive-into-DL-paddlepaddle/docs/11_optimization_algorithm/convexity.md
100644 → 100755
@@ -201,7 +201,7 @@ $$\lambda f(b) + (1-\lambda)f(a) \geq f((1-\lambda)a + \lambda b),$$
$f: \mathbb{R}^n \rightarrow \mathbb{R}$
is convex if and only if, for all $\mathbf{x}, \mathbf{y} \in \mathbb{R}^n$,

$$g(z) \stackrel{\mathrm{def}}{=} f(z \mathbf{x} + (1-z) \mathbf{y}) \text{ where } z \in [0,1]$$

is convex.
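
This criterion reduces convexity in $\mathbb{R}^n$ to convexity of one-dimensional functions, which is easy to probe numerically. A minimal sketch (the helper `convex_along_segment` is hypothetical, and a grid test is only evidence, not a proof):

```python
import numpy as np

def convex_along_segment(f, x, y, num=101, tol=1e-9):
    # Evaluate g(z) = f(z*x + (1-z)*y) on a grid and test midpoint convexity:
    # each interior value must lie below the average of its two neighbors.
    z = np.linspace(0.0, 1.0, num)
    g = np.array([f(zi * x + (1 - zi) * y) for zi in z])
    return bool(np.all(g[1:-1] <= 0.5 * (g[:-2] + g[2:]) + tol))

f = lambda v: float(v @ v)  # squared Euclidean norm, a convex function
rng = np.random.default_rng(0)
x, y = rng.normal(size=3), rng.normal(size=3)
print(convex_along_segment(f, x, y))  # True
```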

@@ -317,7 +317,7 @@ $$\mathrm{Proj}_\mathcal{X}(\mathbf{x}) = \mathop{\mathrm{argmin}}_{\mathbf{x}'
* Convex constraints can be added via the Lagrangian; in practice, simply adding a penalty to the objective function does the job.
* A projection maps a point to the point in the convex set that is closest to the original point.

## Exercises

1. Suppose we want to verify the convexity of a set by drawing all line segments between points inside the set and checking whether the segments are contained in it.
i. Prove that it suffices to check only the points on the boundary.
@@ -339,4 +339,3 @@ i. As an intermediate step, write out the penalized objective $|\mathbf{w} - \mathbf{w}'|_2^2 + \lamb
ii. Can you find the "correct" value of $\lambda$ without repeated trial and error?

9. Given a convex set $\mathcal{X}$ and two vectors $\mathbf{x}$ and $\mathbf{y}$, prove that projections never increase distances, i.e., $\|\mathbf{x} - \mathbf{y}\| \geq \|\mathrm{Proj}_\mathcal{X}(\mathbf{x}) - \mathrm{Proj}_\mathcal{X}(\mathbf{y})\|$.

1 change: 0 additions & 1 deletion Dive-into-DL-paddlepaddle/docs/11_optimization_algorithm/minibatch-sgd.md
100644 → 100755
@@ -352,4 +352,3 @@ train_concise_ch11(trainer, {'learning_rate': 0.01}, data_iter)
1. Modify the batch size and learning rate, and observe the rate of decline of the objective function's value and the time consumed per epoch.
1. Compare minibatch stochastic gradient descent with a variant that actually *samples with replacement* from the training set. What do you observe?
1. An evil genie replicates your dataset without telling you (i.e., each observation occurs twice and your dataset grows to twice its original size, but no one told you). How does the behavior of stochastic gradient descent, minibatch stochastic gradient descent, and gradient descent change?

1 change: 0 additions & 1 deletion Dive-into-DL-paddlepaddle/docs/11_optimization_algorithm/momentum.md
100644 → 100755
@@ -332,4 +332,3 @@ $$
1. Try out gradient descent and momentum on a quadratic problem where you have multiple eigenvalues, i.e., $f(x) = \frac{1}{2} \sum_i \lambda_i x_i^2$, e.g., $\lambda_i = 2^{-i}$. Plot how the values of $x$ decrease for the initialization $x_i = 1$ (see the sketch after this list).
1. Derive the minimum value and the minimizer of $h(\mathbf{x}) = \frac{1}{2} \mathbf{x}^\top \mathbf{Q} \mathbf{x} + \mathbf{x}^\top \mathbf{c} + b$.
1. What changes when we perform stochastic gradient descent with momentum? What happens when we use minibatch stochastic gradient descent with momentum? Experiment with the parameters.
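
A minimal sketch for the first exercise (step size, dimension, and momentum values are assumed; not code from this PR):

```python
import numpy as np

def run(beta, lr=0.4, steps=30, n=4):
    lam = 2.0 ** -np.arange(n)      # eigenvalues lambda_i = 2^{-i}
    x = np.ones(n)                  # initialization x_i = 1
    v = np.zeros(n)
    for _ in range(steps):
        v = beta * v + lam * x      # grad of 0.5 * sum(lambda_i * x_i^2) is lam * x
        x = x - lr * v              # momentum update (beta = 0 is plain GD)
    return x

print(run(beta=0.0))  # gradient descent: small-lambda coordinates barely move
print(run(beta=0.5))  # momentum speeds up the poorly conditioned directions
```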

1 change: 0 additions & 1 deletion Dive-into-DL-paddlepaddle/docs/11_optimization_algorithm/optimization-intro.md
100644 → 100755
@@ -141,4 +141,3 @@ annotate('vanishing gradient', (4, 1), (2, 0.0))
1. Suppose you want to balance a (real) ball on a (real) saddle.
1. Why is this hard?
1. Can you also exploit this effect for optimization algorithms?

1 change: 0 additions & 1 deletion Dive-into-DL-paddlepaddle/docs/11_optimization_algorithm/sgd.md
100644 → 100755
@@ -211,4 +211,3 @@ $${n \choose 1} \frac{1}{n} \left(1-\frac{1}{n}\right)^{n-1} = \frac{n}{n-1} \le
1. Compare the convergence of stochastic gradient descent when you sample from $\{(x_1, y_1), \ldots, (x_n, y_n)\}$ with replacement and when you sample without replacement (a sketch follows this list).
1. How would you change the stochastic gradient descent solver if some gradient (or rather, some coordinate associated with it) were consistently larger than all the other gradients?
1. Assume that $f(x) = x^2 (1 + \sin x)$. How many local minima does $f$ have? Can you change $f$ in such a way that minimizing it requires evaluating all of its local minima?
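
A minimal sketch for the first exercise, comparing the two sampling schemes on least squares (the synthetic data and hyperparameters are assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 2))
w_true = np.array([2.0, -3.4])
y = X @ w_true + 0.01 * rng.normal(size=n)

def sgd_error(with_replacement, lr=0.05, epochs=20):
    w = np.zeros(2)
    for _ in range(epochs):
        idx = (rng.integers(0, n, size=n) if with_replacement
               else rng.permutation(n))      # one pass over n indices
        for i in idx:
            g = (X[i] @ w - y[i]) * X[i]     # grad of 0.5 * (x_i @ w - y_i)^2
            w -= lr * g
    return np.linalg.norm(w - w_true)

print(sgd_error(True), sgd_error(False))  # without replacement is typically closer
```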

1 change: 0 additions & 1 deletion Dive-into-DL-paddlepaddle/docs/1_introduction/qy.md
100644 → 100755
@@ -751,4 +751,3 @@ The Canny edge detector :cite:`Canny.1987` and the SIFT feature extractor :cite:`Lowe.2004`
1. Where else can you apply the end-to-end training approach, such as in :numref:`fig_ml_loop`, physics, engineering, and econometrics?

[Discussions](https://discuss.d2l.ai/t/2088)

7 changes: 3 additions & 4 deletions Dive-into-DL-paddlepaddle/docs/2_Preparatory-knowledge/2.1.md
100644 → 100755
@@ -109,7 +109,7 @@ paddle.zeros((2, 3, 4))
[[[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.]],

[[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.]]])
@@ -131,7 +131,7 @@ paddle.ones((2, 3, 4))
[[[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.]],

[[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.]]])
@@ -487,7 +487,7 @@ a, a.item(), float(a), int(a)
<ipython-input-35-ce4f5f8b2bde> in <module>
1 a = paddle.to_tensor([3.5])
----> 2 a, a.item(), float(a), int(a)


AttributeError: 'Tensor' object has no attribute 'item'

@@ -503,4 +503,3 @@ a, a.item(), float(a), int(a)


[Discussions](https://discuss.d2l.ai/t/1747)

1 change: 0 additions & 1 deletion Dive-into-DL-paddlepaddle/docs/2_Preparatory-knowledge/2.2.md
100644 → 100755
@@ -119,4 +119,3 @@ X, y


[Discussions](https://discuss.d2l.ai/t/1750)

5 changes: 2 additions & 3 deletions Dive-into-DL-paddlepaddle/docs/2_Preparatory-knowledge/2.3.md
100644 → 100755
@@ -239,7 +239,7 @@ X
[[[0 , 1 , 2 , 3 ],
[4 , 5 , 6 , 7 ],
[8 , 9 , 10, 11]],

[[12, 13, 14, 15],
[16, 17, 18, 19],
[20, 21, 22, 23]]])
@@ -323,7 +323,7 @@ a + X, (a * X).shape
[[[2 , 3 , 4 , 5 ],
[6 , 7 , 8 , 9 ],
[10, 11, 12, 13]],

[[14, 15, 16, 17],
[18, 19, 20, 21],
[22, 23, 24, 25]]]), [2, 3, 4])
@@ -820,4 +820,3 @@ paddle.norm(paddle.ones(shape=[4, 9], dtype='float32'))


[Discussions](https://discuss.d2l.ai/t/1751)

1 change: 0 additions & 1 deletion Dive-into-DL-paddlepaddle/docs/2_Preparatory-knowledge/2.4.md
100644 → 100755
@@ -249,4 +249,3 @@ $$\frac{dy}{dx_i} = \frac{dy}{du_1} \frac{du_1}{dx_i} + \frac{dy}{du_2} \frac{du


[Discussions](https://discuss.d2l.ai/t/1756)

3 changes: 1 addition & 2 deletions Dive-into-DL-paddlepaddle/docs/2_Preparatory-knowledge/2.5.md
100644 → 100755
@@ -119,7 +119,7 @@ x.grad
```python
x.clear_gradient()  # reset the gradient accumulated by the previous backward pass
y = x * x
paddle.sum(y).backward()  # reduce y to a scalar before calling backward
x.grad
```
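
Since $y = \sum_i x_i^2$, the gradient should equal $2x$. A quick check (a sketch assuming `x` is a float tensor with `stop_gradient = False`, as earlier in this chapter):

```python
import paddle

x = paddle.arange(4, dtype='float32')
x.stop_gradient = False
paddle.sum(x * x).backward()
print(x.grad)                          # [0., 2., 4., 6.]
print(bool((x.grad == 2 * x).all()))   # True: d/dx sum(x^2) = 2x
```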

@@ -234,4 +234,3 @@ a.grad == d / a


[Discussions](https://discuss.d2l.ai/t/1759)

27 changes: 13 additions & 14 deletions Dive-into-DL-paddlepaddle/docs/2_Preparatory-knowledge/2.7.md
100644 → 100755
@@ -32,40 +32,40 @@ help(paddle.ones)
```

Help on function ones in module paddle.tensor.creation:

ones(shape, dtype=None, name=None)
The OP creates a tensor of specified :attr:`shape` and :attr:`dtype`, and fills it with 1.

Args:
shape(tuple|list|Tensor): Shape of the Tensor to be created, the data type of shape is int32 or int64.
dtype(np.dtype|str, optional): Data type of output Tensor, it supports
bool, float16, float32, float64, int32 and int64. Default: if None, the data type is 'float32'.
name(str, optional): The default value is None. Normally there is no need for user to set this property. For more information, please refer to :ref:`api_guide_Name`

Returns:
Tensor: A tensor of data type :attr:`dtype` with shape :attr:`shape` and all elements set to 1.

Examples:
.. code-block:: python
import paddle

# default dtype for ones OP
data1 = paddle.ones(shape=[3, 2])
# [[1. 1.]
# [1. 1.]
# [1. 1.]]
data2 = paddle.ones(shape=[2, 2], dtype='int32')
# [[1 1]
# [1 1]]

# shape is a Tensor
shape = paddle.full(shape=[2], dtype='int32', fill_value=2)
data3 = paddle.ones(shape=shape, dtype='int32')
# [[1 1]
# [1 1]]



From the documentation, we can see that the `ones` function creates a new tensor with the specified shape and sets all of its elements to 1. Let us [**run a quick test**] to confirm this interpretation:
@@ -97,4 +97,3 @@ paddle.ones([4], dtype='float32')


[Discussions](https://discuss.d2l.ai/t/1765)

1 change: 0 additions & 1 deletion Dive-into-DL-paddlepaddle/docs/3_linear-networks/3.1.md
100644 → 100755
@@ -314,4 +314,3 @@ $$-\log P(\mathbf y \mid \mathbf X) = \sum_{i=1}^n \frac{1}{2} \log(2 \pi \sigma


[Discussions](https://discuss.d2l.ai/t/1775)

4 changes: 2 additions & 2 deletions Dive-into-DL-paddlepaddle/docs/3_linear-networks/3.2.md
100644 → 100755
@@ -246,7 +246,7 @@ def squared_loss(y_hat, y):


```python
def sgd(params, lr, batch_size):
"""Minibatch stochastic gradient descent."""
with torch.no_grad():
for param in params:
@@ -275,7 +275,7 @@ def pddle_sgd(params, lr, batch_size):
- Initialize parameters
- Repeat until done
- Compute the gradient $g \leftarrow \partial_{(w,b)}\frac{1}{\vert B\vert}\sum_{i \in B}l(x^{(i)},y^{(i)},w,b)$
- Update parameters $(w,b) \leftarrow (w,b) - \eta g$

In each epoch, we iterate over the entire dataset with the `data_iter` function, using every example in the training set once (assuming the number of examples is divisible by the batch size). The number of epochs `num_epochs` and the learning rate `lr` are both hyperparameters, set here to 3 and 0.03 respectively. Setting hyperparameters is tricky and requires adjustment by trial and error; we ignore these details for now and describe them in detail in Section 2. A minimal self-contained sketch of the loop follows.
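
The sketch below trains linear regression with minibatch SGD on synthetic data (all values are assumed for illustration; this is not the chapter's exact code):

```python
import paddle

true_w, true_b = paddle.to_tensor([2.0, -3.4]), 4.2
features = paddle.randn([1000, 2])
labels = features @ true_w + true_b + 0.01 * paddle.randn([1000])

w = paddle.randn([2]);  w.stop_gradient = False
b = paddle.zeros([1]);  b.stop_gradient = False
lr, num_epochs, batch_size = 0.03, 3, 10

for epoch in range(num_epochs):
    for i in range(0, 1000, batch_size):
        X, y = features[i:i + batch_size], labels[i:i + batch_size]
        loss = ((X @ w + b - y) ** 2 / 2).sum()
        loss.backward()
        with paddle.no_grad():
            w.set_value(w - lr * w.grad / batch_size)  # the SGD update above
            b.set_value(b - lr * b.grad / batch_size)
        w.clear_gradient()
        b.clear_gradient()
    print(f'epoch {epoch + 1}, loss '
          f'{float((((features @ w + b - labels) ** 2) / 2).mean()):f}')
```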

Empty file modified Dive-into-DL-paddlepaddle/docs/3_linear-networks/3.3.md
100644 → 100755
Empty file.
1 change: 0 additions & 1 deletion Dive-into-DL-paddlepaddle/docs/3_linear-networks/3.4.md
100644 → 100755
@@ -203,4 +203,3 @@ $$H[P] = \sum_j - P(j) \log P(j).$$
1. Extend this to more than two numbers.

[Discussions](https://discuss.d2l.ai/t/1785)

Empty file modified Dive-into-DL-paddlepaddle/docs/3_linear-networks/3.5.md
100644 → 100755
Empty file.
10 changes: 5 additions & 5 deletions Dive-into-DL-paddlepaddle/docs/3_linear-networks/3.6.md
100644 → 100755
@@ -102,7 +102,7 @@ paddle_X.sum(0, keepdim=True), paddle_X.sum(1, keepdim=True)
```

We are now ready to implement the softmax operation. Recall that softmax consists of three steps:


* Exponentiate each term (using `exp`);
* Sum over each row (each example in the minibatch is a row) to get the normalization constant for each example;
@@ -111,7 +111,7 @@ paddle_X.sum(0, keepdim=True), paddle_X.sum(1, keepdim=True)
Before looking at the code, let us review this expression:

$$\mathrm{softmax}(X)_{ij}=\frac{\exp(X_{ij})}{\sum_k \exp(X_{ik})}$$

The denominator, or normalization constant, is sometimes also called the partition function (and its logarithm the log-partition function). The name originates in statistical physics, from an equation that models the distribution of an ensemble of particles.
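
A minimal implementation of these three steps (a sketch in the chapter's style; `softmax` here is our own illustrative helper, not the PR's hidden code):

```python
import paddle

def softmax(X):
    X_exp = paddle.exp(X)                        # 1. exponentiate each term
    partition = X_exp.sum(axis=1, keepdim=True)  # 2. row-wise normalization constant
    return X_exp / partition                     # 3. divide; broadcasting applies

X_prob = softmax(paddle.randn([2, 5]))
print(X_prob.sum(axis=1))  # each row sums to 1
```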


@@ -221,7 +221,7 @@ cross_entropy(paddle_y_hat, paddle_y)


```python
def accuracy(y_hat, y):
"""Compute the number of correct predictions."""
if len(y_hat.shape) > 1 and y_hat.shape[1] > 1:
y_hat = y_hat.argmax(axis=1)
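# The fold hides the rest of this function; a plausible completion
# (a hypothetical reconstruction, not the PR's exact code) follows:
cmp = y_hat.astype(y.dtype) == y          # elementwise: prediction equals label
return float(cmp.astype(y.dtype).sum())   # number of correct predictions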
@@ -255,7 +255,7 @@ def evaluate_accuracy(net, data_iter):


```python
class Accumulator:
"""Accumulate sums over `n` variables."""
def __init__(self, n):
self.data = [0.0] * n
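# The fold hides the remaining methods; a plausible completion
# (a hypothetical reconstruction, not the PR's exact code) follows:
def add(self, *args):
    self.data = [a + float(b) for a, b in zip(self.data, args)]
def reset(self):
    self.data = [0.0] * len(self.data)
def __getitem__(self, idx):
    return self.data[idx]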
@@ -396,7 +396,7 @@ class Animator:


```python
def train_ch3(net, train_iter, test_iter, loss, num_epochs, updater):
"""Train a model (defined in Chapter 3)."""
animator = Animator(xlabel='epoch', xlim=[1, num_epochs], ylim=[0.3, 0.9],
legend=['train loss', 'train acc', 'test acc'])
Empty file modified Dive-into-DL-paddlepaddle/docs/3_linear-networks/3.7.md
100644 → 100755
Empty file.
13 changes: 6 additions & 7 deletions Dive-into-DL-paddlepaddle/docs/4_multilayer-perceptrons/4.1.md
100644 → 100755
@@ -131,9 +131,9 @@ d2l.plot(x.detach().numpy(), x.grad.numpy(), 'x', 'grad of relu', figsize=(5, 2.

C:\Users\WeiWu-GU\anaconda3\envs\pte\lib\site-packages\ipykernel\ipkernel.py:287: DeprecationWarning: `should_run_async` will not call `transform_cell` automatically in the future. Please pass the result to `transformed_cell` argument and any exception that happen during thetransform in `preprocessing_exc_tuple` in IPython 7.17 and above.
and should_run_async(code)
C:\Users\WeiWu-GU\anaconda3\envs\pte\lib\site-packages\paddle\fluid\dygraph\varbase_patch_methods.py:382: UserWarning:
Warning:
tensor.grad will return the tensor value of the gradient. This is an incompatible upgrade for tensor.grad API. It's return type changes from numpy.ndarray in version 2.0 to paddle.Tensor in version 2.1.0. If you want to get the numpy value of the gradient, you can use :code:`x.grad.numpy()`
warnings.warn(warning_msg)
C:\Users\WeiWu-GU\anaconda3\envs\pte\lib\site-packages\d2l\torch.py:41: DeprecationWarning: `set_matplotlib_formats` is deprecated since IPython 7.23, directly use `matplotlib_inline.backend_inline.set_matplotlib_formats()`
display.set_matplotlib_formats('svg')
@@ -195,9 +195,9 @@ d2l.plot(x.detach().numpy(), x.grad.numpy(), 'x', 'grad of sigmoid', figsize=(5,

C:\Users\WeiWu-GU\anaconda3\envs\pte\lib\site-packages\ipykernel\ipkernel.py:287: DeprecationWarning: `should_run_async` will not call `transform_cell` automatically in the future. Please pass the result to `transformed_cell` argument and any exception that happen during thetransform in `preprocessing_exc_tuple` in IPython 7.17 and above.
and should_run_async(code)
C:\Users\WeiWu-GU\anaconda3\envs\pte\lib\site-packages\paddle\fluid\dygraph\varbase_patch_methods.py:382: UserWarning:
Warning:
tensor.grad will return the tensor value of the gradient. This is an incompatible upgrade for tensor.grad API. It's return type changes from numpy.ndarray in version 2.0 to paddle.Tensor in version 2.1.0. If you want to get the numpy value of the gradient, you can use :code:`x.grad.numpy()`
warnings.warn(warning_msg)
C:\Users\WeiWu-GU\anaconda3\envs\pte\lib\site-packages\d2l\torch.py:41: DeprecationWarning: `set_matplotlib_formats` is deprecated since IPython 7.23, directly use `matplotlib_inline.backend_inline.set_matplotlib_formats()`
display.set_matplotlib_formats('svg')
@@ -249,9 +249,9 @@ d2l.plot(x.detach().numpy(), x.grad.numpy(), 'x', 'grad of tanh', figsize=(5, 2.

C:\Users\WeiWu-GU\anaconda3\envs\pte\lib\site-packages\ipykernel\ipkernel.py:287: DeprecationWarning: `should_run_async` will not call `transform_cell` automatically in the future. Please pass the result to `transformed_cell` argument and any exception that happen during thetransform in `preprocessing_exc_tuple` in IPython 7.17 and above.
and should_run_async(code)
C:\Users\WeiWu-GU\anaconda3\envs\pte\lib\site-packages\paddle\fluid\dygraph\varbase_patch_methods.py:382: UserWarning:
Warning:
tensor.grad will return the tensor value of the gradient. This is an incompatible upgrade for tensor.grad API. It's return type changes from numpy.ndarray in version 2.0 to paddle.Tensor in version 2.1.0. If you want to get the numpy value of the gradient, you can use :code:`x.grad.numpy()`
warnings.warn(warning_msg)
C:\Users\WeiWu-GU\anaconda3\envs\pte\lib\site-packages\d2l\torch.py:41: DeprecationWarning: `set_matplotlib_formats` is deprecated since IPython 7.23, directly use `matplotlib_inline.backend_inline.set_matplotlib_formats()`
display.set_matplotlib_formats('svg')
@@ -277,4 +277,3 @@ d2l.plot(x.detach().numpy(), x.grad.numpy(), 'x', 'grad of tanh', figsize=(5, 2.


[Discussions](https://discuss.d2l.ai/t/1796)

1 change: 0 additions & 1 deletion Dive-into-DL-paddlepaddle/docs/4_multilayer-perceptrons/4.10.md
100644 → 100755
@@ -383,4 +383,3 @@ train_and_pred(train_features, test_features, train_labels, test_data,


[Discussions](https://discuss.d2l.ai/t/1824)

7 changes: 3 additions & 4 deletions Dive-into-DL-paddlepaddle/docs/4_multilayer-perceptrons/4.2.md
100644 → 100755
@@ -105,7 +105,7 @@ pd2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, updater)
1 num_epochs, lr = 10, 0.1
2 updater = paddle.optimizer.SGD(learning_rate=lr, parameters=params)
----> 3 pd2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, updater)


D:\workspace\d2ltopaddle\chapter_multilayer-perceptrons\pd2l.py in train_ch3(net, train_iter, test_iter, loss, num_epochs, updater)
324 legend=['train loss', 'train acc', 'test acc'])
@@ -127,13 +127,13 @@ pd2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, updater)
2 X = X.reshape((-1, num_inputs))
3 H = relu(X@W1 + b1) # here "@" stands for matrix multiplication
----> 4 return (H@W2 + b2)


~\anaconda3\envs\pte\lib\site-packages\paddle\fluid\dygraph\math_op_patch.py in __impl__(self, other_var)
248 axis = -1
249 math_op = getattr(core.ops, op_type)
--> 250 return math_op(self, other_var, 'axis', axis)
251
252 comment = OpProtoHolder.instance().get_op_proto(op_type).comment


@@ -170,4 +170,3 @@ d2l.predict_ch3(net, test_iter)


[Discussions](https://discuss.d2l.ai/t/1804)

1 change: 0 additions & 1 deletion Dive-into-DL-paddlepaddle/docs/4_multilayer-perceptrons/4.3.md
100644 → 100755
@@ -64,4 +64,3 @@ train_iter, test_iter = pd2l.load_data_fashion_mnist(batch_size)


[Discussions](https://discuss.d2l.ai/t/1802)
