
Reduce redundant CUDA Jacobian uploads during a linear solve #2806

Draft

LwhJesse wants to merge 1 commit into su2code:develop from LwhJesse:perf/gpu-single-upload-pr

Conversation

@LwhJesse

Proposed Changes

This draft PR reduces redundant CUDA Jacobian uploads in the CUDA matrix-vector product path.

Currently, the CUDA matvec path uploads the Jacobian from host to device inside each GPUMatrixVectorProduct() call, which can transfer the same matrix many times during a single linear solve.

This patch moves the Jacobian upload to the beginning of the linear solve path, so the device-side matrix can be reused across repeated matvec calls within the same solve. The change assumes that the Jacobian remains unchanged during a single linear solve.

The implementation is intentionally small:

  • remove the per-matvec HtDTransfer() call from CSysMatrixGPU.cu;
  • upload the Jacobian once before the linear solve in CSysSolve.cpp;
  • add the same upload for the Newton integration path in CNewtonIntegration.hpp.

Preliminary local CUDA benchmarks on an RTX 4060 Laptop GPU show speedups across 6 cases:

| Case | Baseline | Patched | Speedup |
| --- | --- | --- | --- |
| periodic2d_sector | 1.827456 s | 1.714423 s | 1.066x |
| udf_lam_flatplate_s | 4.584672 s | 3.245981 s | 1.412x |
| udf_lam_flatplate_m | 22.757893 s | 17.007683 s | 1.338x |
| udf_lam_flatplate_l | 38.809395 s | 29.876664 s | 1.299x |
| udf_test_11_probes_s | 2.993813 s | 2.375505 s | 1.260x |
| udf_test_11_probes_m | 15.922803 s | 12.195998 s | 1.306x |

Geometric mean speedup: approximately 1.275x.

Nsight Systems indicates that the speedup mainly comes from reduced Host-to-Device memcpy time. Nsight Compute shows that the GPUMatrixVectorProductAdd kernel itself is essentially unchanged, which is expected because this patch does not modify the kernel implementation.

Draft Status

This PR is still a draft because I need to:

  • review whether every CUDA matvec entry path now has a valid prior matrix upload;
  • re-run final validation on the latest develop;
  • verify that CPU / non-CUDA paths are unaffected;
  • check whether this is the preferred abstraction boundary for matrix upload lifetime management.

Related Work

None.

PR Checklist

  • I am submitting my contribution to the develop branch.
  • My contribution generates no new compiler warnings (try with --warnlevel=3 when using meson).
  • My contribution is commented and consistent with SU2 style (https://su2code.github.io/docs_v7/Style-Guide/).
  • I used the pre-commit hook to prevent dirty commits and used pre-commit run --all to format old commits.
  • I have added a test case that demonstrates my contribution, if necessary.
  • I have updated appropriate documentation (Tutorials, Docs Page, config_template.cpp), if necessary.

Comment on lines +1467 to 1470

```cpp
#ifdef HAVE_CUDA
  if (config->GetCUDA()) Jacobian.HtDTransfer();
#endif
  auto mat_vec = CSysMatrixVectorProduct<ScalarType>(Jacobian, geometry, config);
```
Member


It seems we could make this part of CSysMatrixVectorProduct to handle all cases.

Author

Good point, I agree.

I will revise this so the CUDA matrix upload is handled inside CSysMatrixVectorProduct, rather than requiring each caller to do it explicitly before constructing the matvec wrapper. Then GPUMatrixVectorProduct() can stay free of the per-matvec matrix upload, while the device-side matrix is reused across repeated operator() calls.

I will also remove the explicit HtDTransfer() calls from CSysSolve.cpp and CNewtonIntegration.hpp, check the other CSysMatrixVectorProduct construction paths, and re-run CUDA/non-CUDA tests before marking this ready for review.

