[2023] Day 28: An Introduction to TensorFlow
2023/6/17 21:22:08
1.Tensor + Flow = TensorFlow
A tensor is a generalization of vectors and matrices to potentially higher dimensions.
Internally, TensorFlow represents tensors as n-dimensional arrays of base datatypes.
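To make this concrete, here is a minimal sketch (assuming TensorFlow 2.x) that builds a small tensor and inspects its base datatype and number of dimensions:

```python
import tensorflow as tf

# A rank-2 tensor: internally an n-dimensional array of one base dtype.
t = tf.constant([[1.0, 2.0, 3.0],
                 [4.0, 5.0, 6.0]])

print(t.dtype)  # <dtype: 'float32'> -- the base datatype
print(t.ndim)   # 2 -- the number of dimensions
print(t.shape)  # (2, 3)
```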
Let us start with a simple vector. A vector is commonly understood as something that has a magnitude and a direction.
Simply put, it is an array that contains an ordered list of values.
Without its direction, a vector reduces to a scalar: a value that has only magnitude.
A vector can be used to represent any number of things, such as area and various other attributes.
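As a quick sketch of the scalar/vector distinction (the values here are arbitrary illustrations), TensorFlow treats both as tensors:

```python
import tensorflow as tf

scalar = tf.constant(5.0)         # magnitude only, no direction
vector = tf.constant([3.0, 2.0])  # an ordered list of values

# The magnitude (Euclidean norm) of the vector:
print(tf.norm(vector).numpy())    # 3.6055512 == sqrt(3**2 + 2**2)
```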
2.Components and Basis Vectors
Let’s suppose we have a vector A, as shown in the figure.
This is currently represented without any coordinate system consideration, but most of us are already aware of the Cartesian coordinate system (x, y, z axis).
If the vector A is represented in three-dimensional space, it will look something like what is shown in the figure.
This vector A can also be represented with the help of basis vectors.
Basis vectors are associated with the coordinate system and can be used to represent any vector.
These basis vectors have a length of 1 and, hence, are also known as unit vectors.
The direction of these basis vectors is determined by their respective coordinates.
For example, for a three-dimensional representation, we have three basis vectors (x, y, z), so the x basis vector points in the direction of the x axis, and the y basis vector points in the direction of the y axis. Similarly, this is the case for z.
Once the basis vectors are present, we can use the coordinate system to find the components that represent the original vector A.
For simplicity, and to understand the components of the vector well, let’s reduce the coordinate system from three dimensions to two. Now the vector A looks something like what is shown in the figure.
To find the first component of the vector A along the x axis, we project it onto the x axis, as shown in the figure.
The point where the projection meets the x axis is known as the x component, or first component, of the vector.
If you look carefully, you can easily recognize this x component as the sum of a few basis vectors along the x axis.
In this case, adding three basis vectors will give the x component of vector A.
Similarly, we can find the y component of vector A by projecting it onto the y axis and adding up the basis vectors along the y axis (two of them, in this case) to represent it.
In simple terms, we can think of this as how much one has to move in the x axis direction and y axis direction in order to reach vector A.
A = 3x + 2y
One other thing worth noting is that as the angle between vector A and the x axis increases, the x component decreases, but the y component increases.
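This is easy to verify numerically. The sketch below (plain Python with NumPy; the magnitude and angles are arbitrary illustrative values) computes the two components as projections and shows the x component shrinking while the y component grows as the angle increases:

```python
import numpy as np

magnitude = np.hypot(3, 2)  # |A| for A = 3x + 2y
for theta_deg in (30, 45, 60):
    theta = np.radians(theta_deg)
    a_x = magnitude * np.cos(theta)  # projection onto the x axis
    a_y = magnitude * np.sin(theta)  # projection onto the y axis
    print(f"theta={theta_deg} deg: x component={a_x:.2f}, y component={a_y:.2f}")
```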
Vectors are part of a bigger class of objects known as tensors.
If we multiply a vector by another vector (a dot product), the result is a scalar quantity, whereas if we multiply a vector by a scalar value, its magnitude simply increases or decreases in the same proportion, without changing its direction.
However, if we multiply a vector by a tensor, the result is a new vector with a changed magnitude as well as a new direction.
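The following sketch shows all three products in TensorFlow; the 90-degree rotation matrix is just an illustrative choice of second rank tensor:

```python
import tensorflow as tf

v = tf.constant([3.0, 2.0])

# Vector . vector (dot product) -> a scalar.
print(tf.tensordot(v, v, axes=1))  # 13.0

# Scalar * vector -> same direction, scaled magnitude.
print(2.0 * v)                     # [6. 4.]

# Second rank tensor * vector -> a new vector with a new
# magnitude and direction (here, a 90-degree rotation).
rot90 = tf.constant([[0.0, -1.0],
                     [1.0,  0.0]])
print(tf.linalg.matvec(rot90, v))  # [-2. 3.]
```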
3.Tensor
At the end of the day, a tensor is also a mathematical entity used to represent different properties, similar to a scalar, vector, or matrix.
It is true that a tensor is a generalization of a scalar or vector.
In short, tensors are multidimensional arrays that have some dynamic properties.
A vector is a one-dimensional tensor, whereas a matrix is a two-dimensional tensor.
Tensors can be of two types: constant or variable.
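A minimal sketch of the two kinds in TensorFlow 2.x:

```python
import tensorflow as tf

c = tf.constant([1.0, 2.0, 3.0])  # constant: fixed once created
v = tf.Variable([1.0, 2.0, 3.0])  # variable: can be updated in place

v.assign_add([0.5, 0.5, 0.5])     # e.g. a training step updating weights
print(c.numpy())                  # [1. 2. 3.]
print(v.numpy())                  # [1.5 2.5 3.5]
```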
4.Rank
Tensor rank can sometimes be confusing, but it simply indicates the number of directions required to describe the properties of an object; in other words, it is the number of dimensions of the array contained in the tensor itself.
Breaking this down for different objects: a scalar doesn’t have any direction and hence is automatically a rank 0 tensor, whereas a vector, which can be described using only one direction, is a first rank tensor.
The next object, a matrix, requires two directions to describe it and therefore becomes a second rank tensor.
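In TensorFlow, tf.rank reports exactly this number of dimensions:

```python
import tensorflow as tf

scalar = tf.constant(32)                # no direction needed
vector = tf.constant([3, 4, 5])         # one direction
matrix = tf.constant([[1, 2], [3, 4]])  # two directions

print(tf.rank(scalar).numpy())  # 0
print(tf.rank(vector).numpy())  # 1
print(tf.rank(matrix).numpy())  # 2
```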
5.Shape
The shape of a tensor represents the number of values in each dimension.
Scalar—32: The shape of the tensor would be [].
Vector—[3, 4, 5]: The shape of the first rank tensor would be [3].
Matrix—[[1, 2, 3], [4, 5, 6], [7, 8, 9]]: The shape of the second rank tensor would be [3, 3].
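The same three examples, reading the shape attribute in TensorFlow:

```python
import tensorflow as tf

print(tf.constant(32).shape)           # () -- a scalar has shape []
print(tf.constant([3, 4, 5]).shape)    # (3,) -- shape [3]
print(tf.constant([[1, 2, 3],
                   [4, 5, 6],
                   [7, 8, 9]]).shape)  # (3, 3) -- shape [3, 3]
```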
6.Flow
The Flow part of the name refers to the underlying graph computation framework that uses tensors for its execution.
A typical graph consists of two entities, nodes and edges, as shown in the figure.
Nodes are also called vertices.
The edges are essentially the connections between the nodes/vertices through which the data flows, and nodes are where actual computation takes place.
Now, in general, the graph can be cyclic or acyclic, but in TensorFlow, it is always acyclic.
That is, a path through the graph cannot start and end at the same node.
Let’s consider a simple computational graph, as shown in the figure, and explore some of its attributes.
The nodes in the graph indicate some sort of computation, such as addition, multiplication, division, etc., except for the leaf nodes, which contain the actual tensors with either constant or variable values to be operated upon.
These tensors flow through the edges, or connections, between nodes, and the computation at the next node results in the formation of a new tensor.
So, in the sample graph, a new tensor m is created through a computation at the node using other tensors x and y.
The thing to focus on in this graph is that computation takes place only at the stage after the leaf nodes, because leaf nodes can only hold simple tensors, which flow through the edges and become the inputs to the computation at the next node.
We can also represent the computations at each node through a hierarchical structure.
The nodes at the same level can be executed in parallel, as there is no interdependency between them.
In this case, m and n can be calculated in parallel at the same time.
This attribute of the graph makes it possible to execute computational graphs in a distributed manner, which allows TensorFlow to be used for large-scale applications.
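As a sketch of how such a graph looks in TensorFlow 2, tf.function traces ordinary Python into a computational graph; the node names m and n mirror the discussion above, and the leaf values are arbitrary:

```python
import tensorflow as tf

@tf.function  # traces the Python body into a TensorFlow graph
def compute(x, y, a, b):
    m = x * y     # depends only on the leaf tensors x and y
    n = a + b     # independent of m, so it may run in parallel
    return m + n  # the final node consumes both intermediate tensors

result = compute(tf.constant(2.0), tf.constant(3.0),
                 tf.constant(4.0), tf.constant(5.0))
print(result)  # tf.Tensor(15.0, shape=(), dtype=float32)
```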
7.TensorFlow 1.0 vs. TensorFlow 2.0
- TensorFlow 2.0 doesn’t require an explicit graph definition.
- TensorFlow 2.0 doesn’t require session execution.
- TensorFlow 2.0 doesn’t make it mandatory to initialize variables.
- TensorFlow 2.0 doesn’t require variable sharing via scopes.
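The sketches below illustrate the contrast, assuming a TensorFlow 2.x install where 1.x behavior is emulated through the compat.v1 shim; run each snippet as its own script:

```python
import tensorflow as tf

# TensorFlow 1.x style: define the graph first, then run it in a session.
tf.compat.v1.disable_eager_execution()
a = tf.compat.v1.placeholder(tf.float32)
b = tf.compat.v1.placeholder(tf.float32)
c = a + b
with tf.compat.v1.Session() as sess:
    print(sess.run(c, feed_dict={a: 2.0, b: 3.0}))  # 5.0
```

```python
import tensorflow as tf

# TensorFlow 2.x style: eager by default -- no graph definition,
# no session, no manual variable initialization.
print(tf.constant(2.0) + tf.constant(3.0))  # tf.Tensor(5.0, shape=(), dtype=float32)
```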