
7/1: ♥️ The 糖醋羊排 (sweet-and-sour lamb chop) pairing is official!! ♥️
Aixinwu check-in: aixinwu.sjtu.edu.cn/products/asw-store
Two GRISSO 💊 daily
Starting from 5/12:
| Time | Monday | Tuesday | Wednesday | Thursday | Friday | Saturday | Sunday |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 09:00 | byy group meeting | | | | | | |
| 10:00 | ECE7606 DZY3-306 | ECE7606 DZY3-306 | | | | | |
| 13:00 | 490 group meeting | | | | | | |
| 14:00 | ENGR6002 DXY209 | | | | | | |
| 16:00 | MATH6003 DXY215 | MATH6003 DXY302 | | | | | |
Credits: 3 (MATH) + 3 (ECE) + 1 (TC)
For the figure, each step corresponds to one processed batch (batch size 16).
```bash
python train.py --log_name resdcn18_all \
python train.py --log_name resdcn50_all \
python train.py --log_name resdcn_all \
python train.py --log_name resdcn_all \
python train.py --log_name hrnet_all \
python train.py --log_name hg_all \
```
Course: VV570/MATH6003J: Introduction to Engineering Numerical Analysis
1. Introduction & Problem Statement
In robotic perception, sensors like LiDAR capture the environment as a "point cloud": a set of data points in space. To enable tasks like mapping or localization, a robot must align multiple point clouds taken from different positions. This project will address this fundamental robotics problem by implementing the Iterative Closest Point (ICP) algorithm, a numerical optimization method used to find the optimal alignment between two point clouds.
2. Objective
The primary objective of this project is to develop a working 2D implementation of the ICP algorithm. We will apply numerical analysis techniques to find the rigid transformation (rotation and translation) that minimizes the distance between two point clouds, demonstrating a practical application of concepts from the course to the field of robotics.
3. Methodology
Our approach will be to apply computational tools to solve this problem. We will use Python with the NumPy library for numerical computations and Matplotlib for visualization. The core steps of our implementation will be:
- **Data Association:** For each point in a source cloud, find the closest corresponding point in the target cloud.
- **Numerical Optimization:** Calculate the optimal rotation and translation that minimize the mean squared error between the corresponding point pairs. This will be solved using Singular Value Decomposition (SVD); the standard closed-form solution is sketched below.
- **Iterative Refinement:** Apply the calculated transformation to the source cloud and repeat the process until the alignment error converges below a set threshold.
We will start with the simplification of working in 2D and assume a reasonable initial alignment between the point clouds.
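As a sketch of the optimization step, the standard SVD-based (Kabsch) closed-form solution for matched pairs $\{(p_i, q_i)\}_{i=1}^{N}$ is:

$$
\bar{p} = \frac{1}{N}\sum_{i} p_i, \qquad \bar{q} = \frac{1}{N}\sum_{i} q_i, \qquad H = \sum_{i} (p_i - \bar{p})(q_i - \bar{q})^{\top}
$$

$$
H = U \Sigma V^{\top} \implies R = V U^{\top}, \qquad t = \bar{q} - R\,\bar{p}
$$

(with the last column of $V$ negated if $\det(VU^{\top}) < 0$, to exclude reflections). This is the textbook form; our implementation may differ in details.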
4. Expected Outcomes & Justification of Results
By the end of this three-week project, we will deliver:

- A Python implementation of the 2D ICP algorithm.
- Visualizations demonstrating the algorithm's effectiveness by showing the point clouds before and after alignment.
- A plot of the alignment error over iterations, which will justify our results by showing the convergence of the numerical method.
Zhichen Tang
124370910020
## Exercise 1

In this first part, I looked at the logistic model for population growth. The main goal was to solve the equation $u'(t) = C u(t) (1 - u(t)/B)$ using a couple of different numerical methods. The parameters were set to $u_0=100$, $B=1000$, and $C=2/15$.
I take the current value and add the slope at that point multiplied by the time step to get the next value: $u_{n+1} = u_n + \Delta t\, f(t_n, u_n)$.
I completed `feuler.m` accordingly (a sketch follows the `heun.m` listing below); `heun.m` is:
```matlab
% Completed heun.m
function [t, u, dt] = heun( f, I, u0, N )
    dt = (I(2)-I(1))/N;            % uniform step on I = [t0, tf]
    t = linspace(I(1),I(2),N+1);
    u = zeros(1,N+1);
    u(1) = u0;
    for n = 1:N
        % predictor: a forward Euler step
        u_predictor = u(n) + dt * f(t(n), u(n));
        % corrector: trapezoidal average of the slopes at both ends
        u(n+1) = u(n) + (dt/2) * ( f(t(n), u(n)) + f(t(n+1), u_predictor) );
    end
end
```
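Since the `feuler.m` listing itself isn't reproduced above, here is a minimal sketch, assuming the same interface as `heun.m`:

```matlab
% feuler.m -- a minimal sketch mirroring heun.m's interface;
% the original completed file is not reproduced here.
function [t, u, dt] = feuler( f, I, u0, N )
    dt = (I(2)-I(1))/N;
    t = linspace(I(1),I(2),N+1);
    u = zeros(1,N+1);
    u(1) = u0;
    for n = 1:N
        % forward Euler: next value = current value + dt * slope
        u(n+1) = u(n) + dt * f(t(n), u(n));
    end
end
```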
![](/2025/07/14/[OBS]课程-Math6003 Hw5/ex1b.png)
### c)
After implementing the methods, I ran them with a coarse time step ($\Delta t = 5$, corresponding to $N = 20$) and plotted them against the exact solution.
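A sketch of how this comparison can be run (the interval $[0, 100]$ is inferred from $\Delta t = 5$ with $N = 20$; the plotting details are mine):

```matlab
% Coarse-step comparison of forward Euler and Heun on the logistic ODE
C = 2/15; B = 1000; u0 = 100;
f = @(t,u) C * u .* (1 - u/B);
[t, u_fe] = feuler(f, [0 100], u0, 20);
[~, u_he] = heun(f, [0 100], u0, 20);
u_ex = B ./ (1 + (B/u0 - 1) * exp(-C*t));   % exact logistic solution
plot(t, u_fe, 'o-', t, u_he, 's-', t, u_ex, 'k-');
legend('Forward Euler', 'Heun', 'Exact');
```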
It's pretty clear from the graph that the Heun method does a much better job. Its curve hugs the exact solution line much more tightly than the Forward Euler curve does. This makes sense because it's a higher-order method, so you'd expect it to be more accurate for the same step size.
![](/2025/07/14/[OBS]课程-Math6003 Hw5/ex1c.png)
### d)
Next, I repeated the comparison but with a much smaller time step ($\Delta t = 0.05$, or $N = 2000$).
With the smaller time step, both methods improved dramatically. The plot shows that the lines for the Euler method, Heun's method, and the exact solution are basically right on top of each other. This really shows how decreasing the step size can significantly reduce the error in numerical solutions. However, Heun's absolute error is still lower than Euler's.
![](/2025/07/14/[OBS]课程-Math6003 Hw5/ex1d.png)
---
## Exercise 2
### a)
To solve this, I first had to convert the single second-order equation into a system of two first-order equations. I defined a new variable, $V(t) = U'(t)$. This gives the first equation of the system.
Then, I rearranged the original RLC equation to isolate $U''(t)$:
$$ U''(t) = -\frac{R}{L}U'(t) - \frac{1}{LC}U(t) + \frac{f}{LC} $$
By substituting $V$ for $U'$ and $V'$ for $U''$, I got the second equation. This allowed me to write the whole system in matrix form, $X' = AX + b$:
$$\begin{pmatrix} V' \\ U' \end{pmatrix} = \begin{pmatrix} -R/L & -1/(LC) \\ 1 & 0 \end{pmatrix} \begin{pmatrix} V \\ U \end{pmatrix} + \begin{pmatrix} f/(LC) \\ 0 \end{pmatrix} $$
### b)
**Stability Analysis of Forward Euler**

The Forward Euler method is only stable if $|1 + \Delta t \lambda_i| \le 1$ for all eigenvalues $\lambda_i$ of the matrix $A$. Using the given component values ($L=0.01$, $C=10$, $R=0.1$), the matrix $A$ becomes:

$$ A = \begin{pmatrix} -10 & -10 \\ 1 & 0 \end{pmatrix} $$

I calculated the eigenvalues to be $\lambda = -5 \pm \sqrt{15}$. To ensure stability, the time step $\Delta t$ has to be less than a critical value determined by the eigenvalue with the largest magnitude. The calculation showed that the method is stable only if:

$$ \Delta t \le \frac{2}{5 + \sqrt{15}} \approx 0.2254 \text{ s} $$
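A quick numerical check of this bound (a sketch; the variable names are mine):

```matlab
% Verify the forward Euler stability limit for the RLC matrix
R = 0.1; L = 0.01; C = 10;
A = [ -R/L, -1/(L*C); 1, 0 ];
lambda = eig(A);                    % real and negative: -5 +/- sqrt(15)
dt_max = 2 / max(abs(lambda));      % |1 + dt*lambda| <= 1 for real lambda < 0
fprintf('dt_max = %.4f s\n', dt_max);   % about 0.2254
```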
### c)
**Simulating with Forward Euler**

I ran simulations for three different time steps. The results were a perfect illustration of the stability condition.
![](/2025/07/14/[OBS]课程-Math6003 Hw5/ex2c.png)
As you can see:

- With **N=43** ($\Delta t \approx 0.233$), the time step was too large, and the solution completely blew up.
- With **N=46** ($\Delta t \approx 0.217$), the time step was just inside the stable range, and the solution correctly showed a damped wave.
- With **N=500** ($\Delta t = 0.02$), the solution was also stable and much smoother.
### d)
**Simulating with Backward Euler**

Finally, I repeated the same simulations using the Backward Euler method.
![](/2025/07/14/[OBS]课程-Math6003 Hw5/ex2d.png)
The difference is night and day. The Backward Euler method was **stable for all three time steps**, even the large one that made the Forward Euler method fail. This shows that implicit methods like Backward Euler are much more robust for systems like this, as they don't have the same strict stability requirements on the time step. While the accuracy still gets better with smaller steps, you can trust it to give a stable answer no matter what.
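For reference, a minimal sketch of the Backward Euler update for this linear system (the forcing value, the zero initial state, and the horizon $T = 10$, inferred from the $N$ values above, are my assumptions):

```matlab
% Backward Euler for X' = A*X + b: each step solves a small linear
% system, (I - dt*A) * X_{n+1} = X_n + dt*b.
R = 0.1; L = 0.01; C = 10; f = 1;   % f = 1 is an assumed forcing value
A = [ -R/L, -1/(L*C); 1, 0 ];
b = [ f/(L*C); 0 ];
T = 10; N = 43; dt = T/N;           % the step that broke forward Euler
X = zeros(2, N+1);                  % X(1,:) = V = U', X(2,:) = U
for n = 1:N
    X(:, n+1) = (eye(2) - dt*A) \ (X(:, n) + dt*b);
end
```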
Use SSH to Connect to TensorboardX

Use SSH as a command-line remote tool: launch TensorBoard on the remote machine and open it in a local browser.

Run on the remote machine:

```bash
tensorboard --logdir <path> --port 6006
```

Run locally:

```bash
ssh -N -f -L localhost:16006:localhost:6006 bohan@10.11.16.146
```

Then open `http://localhost:16006` in the local browser.
The result of `hw4_1_a.m`:
Vandermonde matrices are known to be ill-conditioned, with the condition number growing exponentially with $n$.
This tridiagonal matrix is well-conditioned. Its condition number grows polynomially with $n$ (like $O(n^2)$). The error $\epsilon_n$ should grow much more slowly than in the Vandermonde case, exhibiting polynomial rather than exponential growth. The residual $r_n$ will be a better, though not perfect, indicator of the true error compared to the previous case.
Comments on the obtained results are given alongside the `loglog` plot.

The task is to complete the files `jacobi.m` and `gauss_seidel.m`. These files implement iterative methods based on a matrix splitting $A = P - (P-A)$, where $P$ is the preconditioner. The iterative update can be expressed as $x^{(k+1)} = x^{(k)} + z^{(k)}$, where $z^{(k)}$ is the solution to $Pz^{(k)} = r^{(k)}$, and $r^{(k)} = b - Ax^{(k)}$ is the residual.
For the **Jacobi method**, the preconditioner $P$ is the diagonal of $A$. For the **Gauss-Seidel method**, $P$ is the lower triangular part of $A$. The missing lines implement the calculation of the residual, the update step, and the storing of the residual norm at each iteration.
`jacobi.m`:

```matlab
function [x, iter, res] = jacobi(A, b, x0, nmax, tol)
```
`gauss_seidel.m`:

```matlab
function [x, iter, res] = gauss_seidel(A, b, x0, nmax, tol)
```
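Both listings above are truncated; a minimal sketch of the missing body, following the splitting just described (the stopping rule, relative residual below `tol`, is my assumption):

```matlab
% jacobi.m -- a sketch of the truncated listing, using P = diag(A)
function [x, iter, res] = jacobi(A, b, x0, nmax, tol)
    x = x0;
    P = diag(diag(A));           % Jacobi preconditioner
    res = [];
    for iter = 1:nmax
        r = b - A*x;             % residual r^(k) = b - A x^(k)
        res(iter) = norm(r);     % store the residual norm
        if norm(r) / norm(b) < tol
            break
        end
        x = x + P \ r;           % update: solve P z = r, then x <- x + z
    end
end
```

`gauss_seidel.m` would be identical except for the preconditioner, `P = tril(A)`, so that `P \ r` becomes a forward substitution.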
I solve the system for $n=4$ with the tridiagonal matrix $A$ having $2.5$ on the diagonal and $-1$ on the off-diagonals. The right-hand side is $b=(1.5,0.5,0.5,1.5)^{\top}$. The initial guess is $x^{(0)}=0$, tolerance is $10^{-10}$, and max iterations is $10^8$. I compare the number of iterations and computing time.
The matrix $A$ is strictly diagonally dominant since $|a_{ii}| = 2.5 > \sum_{j \neq i} |a_{ij}|$ for all rows. This property guarantees that both methods will converge.
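A sketch of the test setup, assuming the interfaces above:

```matlab
% n = 4 test problem: tridiagonal A with 2.5 on the diagonal,
% -1 on the off-diagonals, and b = (1.5, 0.5, 0.5, 1.5)'
n = 4;
A = 2.5*eye(n) - diag(ones(n-1,1), 1) - diag(ones(n-1,1), -1);
b = [1.5; 0.5; 0.5; 1.5];
[x_j,  it_j ] = jacobi(A, b, zeros(n,1), 1e8, 1e-10);
[x_gs, it_gs] = gauss_seidel(A, b, zeros(n,1), 1e8, 1e-10);
```

The iteration counts `it_j` and `it_gs` can then be compared directly, alongside `tic`/`toc` timings.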
Gauss-Seidel converges faster.

### c)

The analysis is repeated for the same matrix family with $n$ varying from 4 to 200. The exact solution is $x=(1,\dots,1)^{\top}$. I plot the number of iterations, the relative error $\|x-x_{c}\|/\|x\|$, and the normalized residual $\|b-Ax_{c}\|/\|b\|$ versus $n$. I also examine how the condition number of $A$ changes with $n$. For this well-conditioned matrix (as shown in part e), the number of iterations grows very slowly with $n$. The condition number remains small and essentially constant. The relative error and the normalized residual are very close in value, indicating that the residual is a good estimator of the error.
### d)

The analysis from (c) is repeated for the matrix $A$ with $2$ on the diagonal and $-1$ on the off-diagonals, and $b=(1,0,\dots,0,1)^{\top}$. The exact solution is again $x=(1,\dots,1)^{\top}$. This matrix is only weakly diagonally dominant, so I expect slower convergence.
The number of iterations will grow much more rapidly with $n$. This is because, as shown in part (e), the condition number of this matrix grows quadratically with $n$, making the problem progressively harder to solve.
### e)

I use the given formula for the eigenvalues of a symmetric tridiagonal matrix $M$ with $a$ on the diagonal and $b$ on the off-diagonals:
$$ \lambda_{i}(M) = a + 2b\cos\left(\frac{\pi i}{n+1}\right), \quad i = 1, \dots, n $$
The condition number is $\kappa_2(A) = |\lambda_{max}|/|\lambda_{min}|$.
Case 1: Matrix from part (c)
Here, $a=2.5$ and $b=-1$. The eigenvalues are $\lambda_i = 2.5 - 2\cos(\frac{\pi i}{n+1})$.
As $n$ grows, the eigenvalues fill the interval $(0.5, 4.5)$, so the condition number approaches a constant: $\kappa(A) \to 4.5 / 0.5 = 9$. This is consistent with the numerical results from part (c), explaining the fast and stable convergence.
Case 2: Matrix from part (d)
Here, $a=2$ and $b=-1$. The eigenvalues are $\lambda_i = 2 - 2\cos(\frac{\pi i}{n+1}) = 4\sin^2(\frac{\pi i}{2(n+1)})$. For large $n$, $\lambda_{min} = 4\sin^{2}\left(\frac{\pi}{2(n+1)}\right) \approx \frac{\pi^{2}}{(n+1)^{2}}$ while $\lambda_{max} \to 4$, so $\kappa_2(A) \approx \frac{4(n+1)^{2}}{\pi^{2}} = O(n^{2})$. This quadratic growth explains the slow convergence observed in part (d).
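A quick check of both cases using the eigenvalue formula (a sketch):

```matlab
% Condition numbers from the eigenvalue formula, at n = 200
n = 200; i = 1:n;
lam_c = 2.5 - 2*cos(pi*i/(n+1));              % case 1: a = 2.5, b = -1
lam_d = 2.0 - 2*cos(pi*i/(n+1));              % case 2: a = 2,   b = -1
kappa_c = max(abs(lam_c)) / min(abs(lam_c));  % about 9
kappa_d = max(abs(lam_d)) / min(abs(lam_d));  % about 4*(n+1)^2/pi^2
```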
The objective is to find the coefficients $(\alpha, \beta, \gamma)$ for the finite difference formula:
$$
D_{h}f(\overline{x})=\frac{\alpha f(\overline{x})+\beta f(\overline{x}-h)+\gamma f(\overline{x}-2h)}{h}
$$
The analysis begins by substituting the Taylor series expansions for $f(\overline{x}-h)$ and $f(\overline{x}-2h)$ around the point $\overline{x}$ into the formula.
The expansions are:
$$
f(\overline{x}-h) = f(\overline{x}) - hf'(\overline{x}) + \frac{h^2}{2}f''(\overline{x}) - \frac{h^3}{6}f'''(\overline{x}) + O(h^4)
$$
$$
f(\overline{x}-2h) = f(\overline{x}) - 2hf'(\overline{x}) + 2h^2f''(\overline{x}) - \frac{4h^3}{3}f'''(\overline{x}) + O(h^4)
$$
Substituting these into the formula and grouping terms by derivatives of $f(\overline{x})$ results in:
$$
D_{h}f(\overline{x}) = \frac{\alpha + \beta + \gamma}{h} f(\overline{x}) + (-\beta - 2\gamma) f'(\overline{x}) + \left( \frac{\beta}{2} + 2\gamma \right)h f''(\overline{x}) + O(h^2)
$$
To approximate $f'(\overline{x})$, the coefficient of $f'(\overline{x})$ must be $1$, and the coefficient of $f(\overline{x})$ must be $0$. This yields two necessary equations:

$$
\alpha + \beta + \gamma = 0 \quad (1)
$$

$$
-\beta - 2\gamma = 1 \quad (2)
$$

For the formula to be at least first-order accurate, these two equations must be satisfied.
From equation (2), $\beta$ can be expressed in terms of $\gamma$:
$$
\beta = -1 - 2\gamma
$$
Substituting this into equation (1):
$$
\alpha + (-1 - 2\gamma) + \gamma = 0 \implies \alpha = 1 + \gamma
$$
Therefore, the family of coefficients $(\alpha, \beta, \gamma)$ that provides at least first-order accuracy is given by:
$$
(\alpha, \beta, \gamma) = (1+\gamma, -1-2\gamma, \gamma), \quad \forall \gamma \in \mathbb{R}
$$
To achieve second-order accuracy, the truncation error must be $O(h^2)$. This requires the coefficient of the $h$ term in the error expansion to also be zero.
$$
\frac{\beta}{2} + 2\gamma = 0 \quad (3)
$$

Together with equations (1) and (2), this creates a system of three linear equations to be solved for the unique values of $(\alpha, \beta, \gamma)$:
From equation (3), we find that $\beta = -4\gamma$. Substituting this into equation (2):
$$
-(-4\gamma) - 2\gamma = 1 \implies 2\gamma = 1 \implies \gamma = \frac{1}{2}
$$
With the value for $\gamma$, $\beta$ can be found:
$$
\beta = -4 \left( \frac{1}{2} \right) = -2
$$
Finally, substituting $\beta$ and $\gamma$ into equation (1) yields $\alpha$:
$$
\alpha + (-2) + \frac{1}{2} = 0 \implies \alpha = \frac{3}{2}
$$
The unique values which give a formula of order 2 are:
$$
\left(\alpha, \beta, \gamma\right) = \left(\frac{3}{2}, -2, \frac{1}{2}\right)
$$
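A quick numerical check of this result (a sketch; $f = e^{x}$ and $\overline{x} = 1$ are arbitrary choices of mine):

```matlab
% Verify second-order convergence of D_h with (3/2, -2, 1/2)
f = @exp; xbar = 1;
for h = [1e-1, 1e-2, 1e-3]
    Dh = ( 1.5*f(xbar) - 2*f(xbar-h) + 0.5*f(xbar-2*h) ) / h;
    fprintf('h = %.0e   error = %.2e\n', h, abs(Dh - exp(xbar)));
end
% the error drops by ~100x for each 10x reduction in h: order 2
```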
The first task is to compute the integral of the function $f(x)=e^{-x}\sin(x)$ on the interval $[a,b]=[0,2]$. The computation uses the composite midpoint formula with $M=10$ sub-intervals.
The resulting numerical approximation of the integral is 0.468592. The exact integral is 0.466630. So the error is 0.001962.
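A minimal sketch reproducing these numbers:

```matlab
% Composite midpoint rule on [0, 2] with M = 10 sub-intervals
f = @(x) exp(-x) .* sin(x);
a = 0; b = 2; M = 10;
h = (b - a) / M;
xm = a + h/2 : h : b - h/2;                % midpoints of the sub-intervals
Q = h * sum(f(xm));                        % about 0.468592
I = (1 - exp(-2)*(sin(2)+cos(2))) / 2;     % exact value, about 0.466630
err = abs(I - Q);                          % about 0.001962
```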
This section analyzes the convergence of the composite midpoint formula for the same function and interval. The number of sub-intervals is varied as $M = 10^{1}, 10^{2}, 10^{3}, \dots, 10^{5}$.
The absolute error, $|I(f)-Q_{h}^{pm}(f)|$, is computed for each corresponding step size, $h=(b-a)/M$. This error is the difference between the numerical result and the exact value of the integral, $I=\bigl(1-e^{-2}(\sin(2)+\cos(2))\bigr)/2$.
A graph of the error versus the step size $h$ is then created using a logarithmic scale for both axes. In such a plot, the slope of the resulting line corresponds to the order of convergence.
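A sketch of how this convergence study can be coded:

```matlab
% Error of the composite midpoint rule vs. step size, on log-log axes
f = @(x) exp(-x) .* sin(x);
a = 0; b = 2;
I = (1 - exp(-2)*(sin(2)+cos(2))) / 2;     % exact integral
Ms = 10.^(1:5);
errs = zeros(size(Ms));
for k = 1:numel(Ms)
    h = (b - a) / Ms(k);
    xm = a + h/2 : h : b - h/2;
    errs(k) = abs(I - h * sum(f(xm)));
end
hs = (b - a) ./ Ms;
loglog(hs, errs, 'o-');
p = polyfit(log(hs), log(errs), 1);        % p(1) is the slope, about 2
```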
**Plot Analysis** The plot displays a straight line with a negative slope. A linear fit to the logarithmic data reveals a slope of approximately 2. This visually confirms that the method exhibits second-order convergence, which aligns with the theoretical error bounds for the composite midpoint rule.
The analysis is repeated using the composite trapezoidal formula and the composite Simpson’s formula. The errors for both methods, $|I(f)-Q_{h}^{trap}(f)|$ and $|I(f)-Q_{h}^{simp}(f)|$, are plotted on the same graph in logarithmic scale.
**Plot Analysis and Comparison** The resulting graph contains two distinct straight lines, one for each method, with slopes of approximately 2 for the trapezoidal rule and 4 for Simpson's rule.
A comparison of the results shows that for any given step size $h$, the error from Simpson’s rule is substantially smaller than the error from the trapezoidal rule, highlighting the superior accuracy of the higher-order method for smooth functions.
Finally, the convergence analysis is repeated for all three methods on a new function, $f(x)=\sqrt{|x|^{3}}$, over the interval $[a,b]=[-2,2]$. The exact value for this integral is $I=\frac{16}{5}\sqrt{2}$.
Comments on the Obtained Results
The function $f(x)=\sqrt{|x|^{3}}$ is not sufficiently smooth at the point $x=0$ within the integration interval. While the function and its first derivative are continuous, its second derivative, $f''(x) \propto |x|^{-1/2}$, is unbounded at $x=0$.
The theoretical error estimates that predict 2nd and 4th-order convergence for these methods are based on the assumption that the function’s higher-order derivatives are continuous and bounded. Since this condition is violated, a degradation in the observed convergence rates is expected.
**Plot Analysis** The log-log plot for this non-smooth function reveals the following:

- The convergence rates for all three methods are significantly reduced.
- The slopes of the error lines for the midpoint, trapezoidal, and Simpson's rules are all approximately **1.5**.
- The high-order accuracy advantage of Simpson's rule is lost; its performance becomes comparable to the other two lower-order methods.

This result demonstrates that the practical performance and convergence rate of a numerical integration method are fundamentally limited by the smoothness of the function being integrated.