Commit 6bd7936

Author: cuda-python-bot (committed)
Deploy doc preview for PR 1775 (d10ab07)
1 parent f8514ba commit 6bd7936

142 files changed: 3250 additions & 347 deletions

Some content is hidden: large commits have some content hidden by default.
Lines changed: 1 addition & 1 deletion

@@ -1,4 +1,4 @@
 # Sphinx build info version 1
 # This file records the configuration used when building these files. When it is not found, a full rebuild will be done.
-config: bb6f261e8a77d2cead18289efd2c4145
+config: 83c58067e36557c95f891999fa170715
 tags: 645f666f9bcd5a90fca523b33c5a78b7

docs/pr-preview/pr-1775/cuda-core/latest/_sources/api.rst.txt

Lines changed: 15 additions & 0 deletions

@@ -62,6 +62,21 @@ CUDA runtime
    on other non-blocking streams.


+.. module:: cuda.core.managed_memory
+
+Managed memory
+--------------
+
+.. autosummary::
+   :toctree: generated/
+
+   advise
+   prefetch
+   discard_prefetch
+
+.. module:: cuda.core
+   :no-index:
+
 CUDA compilation toolchain
 --------------------------


docs/pr-preview/pr-1775/cuda-core/latest/_sources/generated/cuda.core.Buffer.rst.txt

Lines changed: 0 additions & 3 deletions

@@ -14,16 +14,13 @@ cuda.core.Buffer


    .. automethod:: __init__
-   .. automethod:: advise
    .. automethod:: close
    .. automethod:: copy_from
    .. automethod:: copy_to
-   .. automethod:: discard_prefetch
    .. automethod:: fill
    .. automethod:: from_handle
    .. automethod:: from_ipc_descriptor
    .. automethod:: get_ipc_descriptor
-   .. automethod:: prefetch


Lines changed: 6 additions & 0 deletions

@@ -0,0 +1,6 @@
+cuda.core.managed\_memory.advise
+================================
+
+.. currentmodule:: cuda.core.managed_memory
+
+.. autofunction:: advise
Lines changed: 6 additions & 0 deletions

@@ -0,0 +1,6 @@
+cuda.core.managed\_memory.discard\_prefetch
+===========================================
+
+.. currentmodule:: cuda.core.managed_memory
+
+.. autofunction:: discard_prefetch
Lines changed: 6 additions & 0 deletions

@@ -0,0 +1,6 @@
+cuda.core.managed\_memory.prefetch
+==================================
+
+.. currentmodule:: cuda.core.managed_memory
+
+.. autofunction:: prefetch

docs/pr-preview/pr-1775/cuda-core/latest/_sources/generated/cuda.core.utils.StridedMemoryView.rst.txt

Lines changed: 1 addition & 0 deletions

@@ -14,6 +14,7 @@ cuda.core.utils.StridedMemoryView


    .. automethod:: __init__
+   .. automethod:: as_tensor_map
    .. automethod:: copy_from
    .. automethod:: copy_to
    .. automethod:: from_any_interface

docs/pr-preview/pr-1775/cuda-core/latest/_sources/release/0.7.x-notes.rst.txt

Lines changed: 6 additions & 4 deletions

@@ -35,10 +35,12 @@ New features
    preference, or a tuple such as ``("device", 0)``, ``("host", None)``, or
    ``("host_numa", 3)``.

-- Added managed-memory controls on :class:`Buffer`: ``advise()``,
-  ``prefetch()``, and ``discard_prefetch()``. These methods validate that the
-  underlying allocation is managed memory and then forward to the corresponding
-  CUDA driver operations for range advice and migration.
+- Added managed-memory range operations under :mod:`cuda.core.managed_memory`:
+  ``advise()``, ``prefetch()``, and ``discard_prefetch()``. These free
+  functions accept either a managed :class:`Buffer` or a raw pointer plus
+  ``size=``, validate that the target allocation is managed memory, and then
+  forward to the corresponding CUDA driver operations for range advice and
+  migration.

 - Added ``numa_id`` option to :class:`PinnedMemoryResourceOptions` for explicit
   control over host NUMA node placement. When ``ipc_enabled=True`` and
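The release note above describes a dispatch pattern: each free function accepts either a managed `Buffer` or a raw pointer plus `size=`, validates that the target is managed memory, and then forwards the resolved range to the driver. A minimal, self-contained sketch of that pattern follows. Everything here is illustrative: `ManagedBuffer`, `_resolve_range`, and the returned tuple are hypothetical stand-ins, not the actual cuda.core implementation, and the real function would call into the CUDA driver (e.g. `cuMemPrefetchAsync`) instead of returning its arguments.

```python
from dataclasses import dataclass


@dataclass
class ManagedBuffer:
    """Hypothetical stand-in for cuda.core's Buffer, keeping only
    the fields this sketch needs."""
    handle: int       # device pointer
    size: int         # allocation size in bytes
    is_managed: bool  # True for managed (unified) memory allocations


def _resolve_range(target, size=None):
    """Normalize a Buffer-or-raw-pointer target into a (ptr, nbytes) range,
    mirroring the validation the release note describes."""
    if isinstance(target, ManagedBuffer):
        if not target.is_managed:
            raise ValueError("target Buffer is not backed by managed memory")
        return target.handle, target.size
    if size is None:
        raise TypeError("size= is required when target is a raw pointer")
    return int(target), int(size)


def prefetch(target, location, *, stream=None, size=None):
    """Validate the range; the real library would then forward it to the
    driver's prefetch operation. Returning the tuple here is only for
    illustration."""
    ptr, nbytes = _resolve_range(target, size)
    return ptr, nbytes, location, stream
```

Under these assumptions, `prefetch(ManagedBuffer(0x7F00, 4096, True), ("device", 0))` resolves the whole buffer, while `prefetch(0x1000, ("host", None), size=256)` covers the raw-pointer form; a non-managed buffer or a pointer without `size=` is rejected before any driver call would be made.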

docs/pr-preview/pr-1775/cuda-core/latest/_static/documentation_options.js

Lines changed: 1 addition & 1 deletion

@@ -1,5 +1,5 @@
 const DOCUMENTATION_OPTIONS = {
-    VERSION: '0.6.1.dev63',
+    VERSION: '0.6.1.dev75',
     LANGUAGE: 'en',
     COLLAPSE_INDEX: false,
     BUILDER: 'html',

docs/pr-preview/pr-1775/cuda-core/latest/api.html

Lines changed: 24 additions & 3 deletions

@@ -44,7 +44,7 @@



-<script src="_static/documentation_options.js?v=520cca20"></script>
+<script src="_static/documentation_options.js?v=59320eb0"></script>
 <script src="_static/doctools.js?v=9bcbadda"></script>
 <script src="_static/sphinx_highlight.js?v=dc90522c"></script>
 <script src="_static/clipboard.min.js?v=a7894cd8"></script>
@@ -53,7 +53,7 @@
 <script>
 DOCUMENTATION_OPTIONS.theme_version = '0.16.1';
 DOCUMENTATION_OPTIONS.theme_switcher_json_url = 'https://nvidia.github.io/cuda-python/cuda-core/nv-versions.json';
-DOCUMENTATION_OPTIONS.theme_switcher_version_match = '0.6.1.dev63';
+DOCUMENTATION_OPTIONS.theme_switcher_version_match = '0.6.1.dev75';
 DOCUMENTATION_OPTIONS.show_version_warning_banner =
 false;
 </script>
@@ -500,6 +500,9 @@
 <li class="toctree-l2"><a class="reference internal" href="generated/cuda.core.StreamOptions.html">cuda.core.StreamOptions</a></li>
 <li class="toctree-l2"><a class="reference internal" href="generated/cuda.core.LaunchConfig.html">cuda.core.LaunchConfig</a></li>
 <li class="toctree-l2"><a class="reference internal" href="generated/cuda.core.VirtualMemoryResourceOptions.html">cuda.core.VirtualMemoryResourceOptions</a></li>
+<li class="toctree-l2"><a class="reference internal" href="generated/cuda.core.managed_memory.advise.html">cuda.core.managed_memory.advise</a></li>
+<li class="toctree-l2"><a class="reference internal" href="generated/cuda.core.managed_memory.prefetch.html">cuda.core.managed_memory.prefetch</a></li>
+<li class="toctree-l2"><a class="reference internal" href="generated/cuda.core.managed_memory.discard_prefetch.html">cuda.core.managed_memory.discard_prefetch</a></li>
 <li class="toctree-l2"><a class="reference internal" href="generated/cuda.core.Program.html">cuda.core.Program</a></li>
 <li class="toctree-l2"><a class="reference internal" href="generated/cuda.core.Linker.html">cuda.core.Linker</a></li>
 <li class="toctree-l2"><a class="reference internal" href="generated/cuda.core.ObjectCode.html">cuda.core.ObjectCode</a></li>
@@ -738,6 +741,23 @@ <h2>CUDA runtime<a class="headerlink" href="#cuda-runtime" title="Link to this h
 on other non-blocking streams.</p>
 </dd></dl>

+</section>
+<section id="managed-memory">
+<span id="module-cuda.core.managed_memory"></span><h2>Managed memory<a class="headerlink" href="#managed-memory" title="Link to this heading">#</a></h2>
+<div class="pst-scrollable-table-container"><table class="autosummary longtable table autosummary">
+<tbody>
+<tr class="row-odd"><td><p><a class="reference internal" href="generated/cuda.core.managed_memory.advise.html#cuda.core.managed_memory.advise" title="cuda.core.managed_memory.advise"><code class="xref py py-obj docutils literal notranslate"><span class="pre">advise</span></code></a>(target, advice, location, *, ...)</p></td>
+<td><p>Apply managed-memory advice to an allocation range.</p></td>
+</tr>
+<tr class="row-even"><td><p><a class="reference internal" href="generated/cuda.core.managed_memory.prefetch.html#cuda.core.managed_memory.prefetch" title="cuda.core.managed_memory.prefetch"><code class="xref py py-obj docutils literal notranslate"><span class="pre">prefetch</span></code></a>(target, location, *, stream, ...)</p></td>
+<td><p>Prefetch a managed-memory allocation range to a target location.</p></td>
+</tr>
+<tr class="row-odd"><td><p><a class="reference internal" href="generated/cuda.core.managed_memory.discard_prefetch.html#cuda.core.managed_memory.discard_prefetch" title="cuda.core.managed_memory.discard_prefetch"><code class="xref py py-obj docutils literal notranslate"><span class="pre">discard_prefetch</span></code></a>(target, location, *, ...)</p></td>
+<td><p>Discard a managed-memory allocation range and prefetch it to a target location.</p></td>
+</tr>
+</tbody>
+</table>
+</div>
 </section>
 <section id="cuda-compilation-toolchain">
 <h2>CUDA compilation toolchain<a class="headerlink" href="#cuda-compilation-toolchain" title="Link to this heading">#</a></h2>
@@ -812,7 +832,7 @@ <h2>CUDA system information and NVIDIA Management Library (NVML)<a class="header
 <td><p>Representation of a device.</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="generated/cuda.core.system.AddressingMode.html#cuda.core.system.AddressingMode" title="cuda.core.system.AddressingMode"><code class="xref py py-obj docutils literal notranslate"><span class="pre">system.AddressingMode</span></code></a></p></td>
-<td><p>alias of <code class="xref py py-class docutils literal notranslate"><span class="pre">DeviceAddressingModeType</span></code></p></td>
+<td><p>alias of <a class="reference external" href="https://nvidia.github.io/cuda-python/cuda-bindings/latest/module/generated/cuda.bindings.nvml.DeviceAddressingModeType.html#cuda.bindings.nvml.DeviceAddressingModeType" title="(in cuda.bindings)"><code class="xref py py-class docutils literal notranslate"><span class="pre">DeviceAddressingModeType</span></code></a></p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="generated/cuda.core.system.AffinityScope.html#cuda.core.system.AffinityScope" title="cuda.core.system.AffinityScope"><code class="xref py py-obj docutils literal notranslate"><span class="pre">system.AffinityScope</span></code></a>(value)</p></td>
 <td><p></p></td>
@@ -1008,6 +1028,7 @@ <h2>CUDA system information and NVIDIA Management Library (NVML)<a class="header
 <li class="toc-h3 nav-item toc-entry"><a class="reference internal nav-link" href="#cuda.core.PER_THREAD_DEFAULT_STREAM"><code class="docutils literal notranslate"><span class="pre">PER_THREAD_DEFAULT_STREAM</span></code></a></li>
 </ul>
 </li>
+<li class="toc-h2 nav-item toc-entry"><a class="reference internal nav-link" href="#managed-memory">Managed memory</a></li>
 <li class="toc-h2 nav-item toc-entry"><a class="reference internal nav-link" href="#cuda-compilation-toolchain">CUDA compilation toolchain</a></li>
 <li class="toc-h2 nav-item toc-entry"><a class="reference internal nav-link" href="#cuda-system-information-and-nvidia-management-library-nvml">CUDA system information and NVIDIA Management Library (NVML)</a></li>
 <li class="toc-h2 nav-item toc-entry"><a class="reference internal nav-link" href="#utility-functions">Utility functions</a></li>
