From d22104cd0fabc2f3ff5f76fc6f7cfe56dc165ba5 Mon Sep 17 00:00:00 2001
From: Ricardo-shuo-liu <13838152117@139.com>
Date: Mon, 3 Nov 2025 23:11:53 +0800
Subject: [PATCH] fix-c19-c23

---
 _typos.toml                              | 7 -------
 docs/design/data_type/float16.md         | 4 ++--
 docs/design/dynamic_rnn/rnn_design.md    | 2 +-
 docs/design/dynamic_rnn/rnn_design_en.md | 2 +-
 docs/design/mkldnn/int8/QAT/C++.md       | 2 +-
 docs/design/motivation/api.md            | 2 +-
 docs/dev_guides/sugon/paddle_c86_cn.md   | 2 +-
 docs/guides/model_convert/update_en.md   | 2 +-
 8 files changed, 8 insertions(+), 15 deletions(-)

diff --git a/_typos.toml b/_typos.toml
index 92ea00cff15..64560a9bec4 100644
--- a/_typos.toml
+++ b/_typos.toml
@@ -40,12 +40,6 @@ Simle = "Simle"
 Sovler = "Sovler"
 Successed = "Successed"
 classfy = "classfy"
-contxt = "contxt"
-convertion = "convertion"
-convinience = "convinience"
-correponding = "correponding"
-corresonding = "corresonding"
-correspoinding = "correspoinding"
 corss = "corss"
 creatation = "creatation"
 creats = "creats"
@@ -135,7 +129,6 @@ similary = "similary"
 simplier = "simplier"
 skiped = "skiped"
 softwares = "softwares"
-sould = "sould"
 specail = "specail"
 sperated = "sperated"
 splited = "splited"
diff --git a/docs/design/data_type/float16.md b/docs/design/data_type/float16.md
index 4081fd6903b..af9d2e1d888 100644
--- a/docs/design/data_type/float16.md
+++ b/docs/design/data_type/float16.md
@@ -93,7 +93,7 @@ To support the above features, two fundamental conversion functions are provided
 float16 float_to_half_rn(float f); // convert to half precision in round-to-nearest-even mode
 float half_to_float(float16 h);
 ```
-which provides one-to-one conversion between float32 and float16. These twos functions will do different conversion routines based on the current hardware. CUDA/ARM instrinsics will be used when the corresonding hardware is available. If the hardware or compiler level does not support float32 to float16 conversion, software emulation will be performed to do the conversion.
+which provide one-to-one conversion between float32 and float16. These two functions use different conversion routines based on the current hardware. CUDA/ARM intrinsics will be used when the corresponding hardware is available. If the hardware or compiler level does not support float32 to float16 conversion, software emulation will be performed to do the conversion.

 ## float16 inference

 In Fluid, a neural network is represented as a protobuf message called [ProgramDesc](https://github.com/PaddlePaddle/docs/blob/develop/docs/design/concepts/program.md), whose Python wrapper is a [Program](https://github.com/PaddlePaddle/docs/blob/develop/docs/design/modules/python_api.md#program). The basic structure of a program is some nested [blocks](https://github.com/PaddlePaddle/docs/blob/develop/docs/design/modules/python_api.md#block), where each block consists of some [variable](https://github.com/PaddlePaddle/docs/blob/develop/docs/design/modules/python_api.md#variable) definitions and a sequence of [operators](https://github.com/PaddlePaddle/docs/blob/develop/docs/design/modules/python_api.md#operator). An [executor](https://github.com/PaddlePaddle/docs/blob/develop/docs/design/concepts/executor.md) will run a given program desc by executing the sequence of operators in the entrance block of the program one by one.
@@ -112,7 +112,7 @@ Operators including convolution and multiplication (used in fully-connected laye
 When these operators are running in float16 mode, the float16 kernel requires those parameter variables to contain weights of Fluid float16 data type. Thus, we need a convenient way to convert the original float weights to float16 weights.

-In Fluid, we use tensor to hold actual data for a variable on the c++ end. [Pybind](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/pybind/tensor_py.h) is used to bind c++ tensors of certain data type with numpy array of the correponding numpy data type on the Python end. Each common c++ built-in data type has a corresponding numpy data type of the same name. However, since there is no built-in float16 type in c++, we cannot directly bind numpy float16 data type with the Fluid float16 class. Since both Fluid float16 and numpy float16 use uint16 as the internal data storage type, we use c++ built-in type `uint16_t` and the corresponding numpy uint16 data type to bridge the gap via [Pybind](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/pybind/tensor_py.h).
+In Fluid, we use tensor to hold actual data for a variable on the c++ end. [Pybind](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/pybind/tensor_py.h) is used to bind c++ tensors of certain data type with numpy array of the corresponding numpy data type on the Python end. Each common c++ built-in data type has a corresponding numpy data type of the same name. However, since there is no built-in float16 type in c++, we cannot directly bind numpy float16 data type with the Fluid float16 class. Since both Fluid float16 and numpy float16 use uint16 as the internal data storage type, we use c++ built-in type `uint16_t` and the corresponding numpy uint16 data type to bridge the gap via [Pybind](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/pybind/tensor_py.h).

 The following code demonstrates how to do the tensor conversion.
 ```Python
diff --git a/docs/design/dynamic_rnn/rnn_design.md b/docs/design/dynamic_rnn/rnn_design.md
index 1c5fde8f403..b04b6f624d9 100644
--- a/docs/design/dynamic_rnn/rnn_design.md
+++ b/docs/design/dynamic_rnn/rnn_design.md
@@ -62,7 +62,7 @@ public:
   LODTensor LODSliceShared(int level, int elem_begin, int elem_end) const;

   // copy other's lod_start_pos_, to share LOD info.
-  // NOTE the LOD info sould not be changed.
+  // NOTE the LOD info should not be changed.
   void ShareConstLODFrom(const LODTensor &other) {
     lod_start_pos_ = other.lod_start_pos_;
   }
diff --git a/docs/design/dynamic_rnn/rnn_design_en.md b/docs/design/dynamic_rnn/rnn_design_en.md
index 31153595f0b..4a011cd7ace 100644
--- a/docs/design/dynamic_rnn/rnn_design_en.md
+++ b/docs/design/dynamic_rnn/rnn_design_en.md
@@ -53,7 +53,7 @@ public:
   LODTensor LODSliceShared(int level, int elem_begin, int elem_end) const;

   // copy other's lod_start_pos_, to share LOD info.
-  // NOTE the LOD info sould not be changed.
+  // NOTE the LOD info should not be changed.
   void ShareConstLODFrom(const LODTensor &other) {
     lod_start_pos_ = other.lod_start_pos_;
   }
diff --git a/docs/design/mkldnn/int8/QAT/C++.md b/docs/design/mkldnn/int8/QAT/C++.md
index 3203e3c5bdd..d47ab61802a 100644
--- a/docs/design/mkldnn/int8/QAT/C++.md
+++ b/docs/design/mkldnn/int8/QAT/C++.md
@@ -51,7 +51,7 @@ To download other Quant models, set the `QUANT_MODEL_NAME` variable to on of the
 - `ResNet50_qat_channelwise`, with input/output scales in `fake_quantize_range_abs_max` operators and the `out_threshold` attributes, with weight scales in `fake_channel_wise_dequantize_max_abs` operators

-### Model convertion
+### Model conversion

 To run this quantiozation approach, first you need to set `AnalysisConfig` first and use `EnableMkldnnInt8` function that converts fake-quant model to INT8 OneDNN one. Examples:
diff --git a/docs/design/motivation/api.md b/docs/design/motivation/api.md
index bc222564e3e..87eca5bd72a 100644
--- a/docs/design/motivation/api.md
+++ b/docs/design/motivation/api.md
@@ -54,7 +54,7 @@ def f(in):
     return o

 # Create 3 topologies (subnets), they share parameters because all
-# correspoinding layers have the same parameter names.
+# corresponding layers have the same parameter names.
 fA = f(paddle.layer.data(input_name="A"))
 fB = f(paddle.layer.data(input_name="B"))
 fQ = f(paddle.layer.data(input_name="Q"))
diff --git a/docs/dev_guides/sugon/paddle_c86_cn.md b/docs/dev_guides/sugon/paddle_c86_cn.md
index 80d11e1ff72..fa35c4b36d1 100644
--- a/docs/dev_guides/sugon/paddle_c86_cn.md
+++ b/docs/dev_guides/sugon/paddle_c86_cn.md
@@ -33,7 +33,7 @@ ROCm 软件栈本身具备较高的成熟度与完备性,用户根据 ROCm 提
   - 动态库加载: 在 [paddle/phi/backends/dynload](https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/phi/backends/dynload) 目录下动态加载 ROCm 加速库及所需 API,如 [hiprand.h](https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/phi/backends/dynload/hiprand.h) [miopen.h](https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/phi/backends/dynload/miopen.h) [rocblas.h](https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/phi/backends/dynload/rocblas.h)等
   - Driver/Runtime 适配:主要在 [paddle/fluid/platform/device/gpu](https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/fluid/platform/device/gpu) 目录下对 HIP 和 CUDA 进行了相关 API 的封装,其中在 [gpu_types.h](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/phi/core/platform/device/gpu/gpu_types.h) 少量封装了部分与 CUDA 差异较小的数据类型定义,部分 ROCm 独有代码位于[paddle/phi/core/platform/device/gpu/rocm](https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/phi/core/platform/device/gpu/rocm)目录
   - Memory 管理:利用上一步封装好的 Driver/Runtime API 对 [memcpy.cc](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/phi/core/memory/memcpy.cc#L574) 与 [paddle/phi/core/memory/allocation](https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/phi/core/memory/allocation) 目录下的多种 Memory Allocator 进行实现
-  - Device Context 管理:利用封装好的 API 实现对设备上下文的管理及设备池的初始化,位于 [device_contxt.h](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/phi/core/platform/device_context.h)
+  - Device Context 管理:利用封装好的 API 实现对设备上下文的管理及设备池的初始化,位于 [device_context.h](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/phi/core/platform/device_context.h)
   - 其他设备管理相关的适配接入,如 Profiler, Tracer, Error Message, NCCL 等,代码主要位于 [Paddle/platform](https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/fluid/platform) 目录下
 3. 算子注册:主要包括 HIP Kernel 的算子注册,以及 MIOpen 的算子在 ROCm 平台上的注册
   - 数据类型支持:除通用数据类型外,还需适配 Paddle 支持的特殊数据类型包括 [float16.h](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/phi/common/float16.h#L144) [complex.h](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/phi/common/complex.h#L88) [bfloat16.h](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/phi/common/bfloat16.h#L65) 等
diff --git a/docs/guides/model_convert/update_en.md b/docs/guides/model_convert/update_en.md
index 9526d05b7a7..8ddb743634c 100644
--- a/docs/guides/model_convert/update_en.md
+++ b/docs/guides/model_convert/update_en.md
@@ -67,7 +67,7 @@ In order to make the API organization more concise and clear, the original direc

 ### API alias rule

-- APIs are created with aliases in different paths for better convinience:
+- APIs are created with aliases in different paths for better convenience:
   - All APIs under device, framework, and tensor directories are aliased in the paddle root directory; all APIs are not aliased in the paddle root directory except a few special APIs.
   - All APIs in the paddle.nn directory except for the functional directory have aliases in the paddle.nn directory; all APIs in the functional directory have no aliases in the paddle.nn directory.
   - ** **It is recommended to give preference to aliases with shorter paths**, for example `paddle.add -> paddle.tensor.add`; `paddle.add` is recommended.