which minimizes a first stage cost function $c(\mathbf{x}_1, \mathbf{u}_1)$ and the expected value of future costs over possible
values of the exogenous stochastic variable $\{w_{t}\}_{t=2}^{T} \in
\Omega$.
Here, $\mathbf{x}_0$ is the initial system state and the
control decisions $\mathbf{u}_t$ are obtained at every period $t$
within a feasible region defined by the incoming state
$\mathbf{x}_{t-1}$ and the realized uncertainty $w_t$. $\mathbf{E}_t$ represents the expected value over future uncertainties $\{w_{\tau}\}_{\tau=t}^{T}$. This
optimization program assumes that the system is entirely defined by
Function $V_{t}(\mathbf{x}_{t-1}, w_t)$ is referred to as the value function. To find the optimal policy for the $1^{\text{st}}$ stage, we need to find the optimal policy for the entire horizon $\{t=2,\cdots,T\}$ or at least estimate the "optimal" value function.
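
One way to make this concrete (a sketch in the notation above; the exact feasible set is the one defined by the constraints of the program) is the backward recursion

```math
V_{t}(\mathbf{x}_{t-1}, w_t) \;=\;
\min_{(\mathbf{x}_{t},\,\mathbf{u}_{t})\ \text{feasible given}\ (\mathbf{x}_{t-1},\,w_{t})}
\left\{ c(\mathbf{x}_{t}, \mathbf{u}_{t}) \;+\; \mathbf{E}_{t+1}\!\left[ V_{t+1}(\mathbf{x}_{t}, w_{t+1}) \right] \right\},
\qquad V_{T+1} \equiv 0,
```

so that the first stage only needs (an estimate of) $\mathbf{E}_{2}\!\left[V_{2}(\mathbf{x}_{1}, w_{2})\right]$.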
"""
# ╔═╡ 60ba261a-f2eb-4b45-ad6d-b6042926ccab
load(joinpath(class_dir, "indecision_tree.png"))
# ╔═╡ 15709f7b-943e-4190-8f40-0cfdb8772183
md"""
Notice that the number of "nodes" to be evaluated (either decisions or their cost) grows exponentially with the number of stages. This is the *Curse of dimensionality* in stochastic programming.
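
As a back-of-the-envelope illustration (the branching factor `b` is an assumption here: each $w_t$ is taken to have `b` possible realizations), the node count of the scenario tree can be computed directly:

```julia
# Nodes of a scenario tree with branching factor `b` over `T` stages:
# 1 + b + b^2 + ... + b^(T-1).
tree_nodes(b, T) = sum(b^t for t in 0:T-1)

tree_nodes(3, 5)    # 121 nodes
tree_nodes(3, 15)   # 7_174_453 nodes, already impractical to enumerate
```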
"""
# ╔═╡ 5d7a4408-21ff-41ec-b004-4b0a9f04bb4f
question_box(md"Can you name a few ways to tackle and/or solve this problem?")
# ╔═╡ c08f511e-b91d-4d17-a286-96469c31568a
md"## Example: Robotic Arm Manipulation"
# ╔═╡ b3129bcb-c24a-4faa-a5cf-f69ce518ea87
load(joinpath(class_dir, "nlp_robot_arm.png"))
# ╔═╡ c1f43c8d-0616-4572-bb48-dbb71e40adda
md"""
The tip of the second link is computed using the direct geometric model:
```math
p(\theta_{1},\theta_{2}) \;=\;
\begin{cases}
x = L_{1}\,\sin\theta_{1} \;+\; L_{2}\,\sin\!\bigl(\theta_{1}+\theta_{2}\bigr),\\[6pt]
y = L_{1}\,\cos\theta_{1} \;+\; L_{2}\,\cos\!\bigl(\theta_{1}+\theta_{2}\bigr).
\end{cases}
```
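
A minimal Julia sketch of this direct geometric model (the function name and the default link lengths `L1 = L2 = 1` are placeholders, not taken from the notebook):

```julia
# Tip position of the two-link arm; angles follow the sine/cosine convention
# above (θ1 measured from the vertical axis, θ2 relative to the first link).
function tip_position(θ1, θ2; L1 = 1.0, L2 = 1.0)
    x = L1 * sin(θ1) + L2 * sin(θ1 + θ2)
    y = L1 * cos(θ1) + L2 * cos(θ1 + θ2)
    return (x = x, y = y)
end

tip_position(π / 4, π / 6)   # tip position for θ1 = 45°, θ2 = 30°
```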
Foldable(md"All mechanical systems can be written this way. Why?", md"""
Manipulator Dynamics Equations are a way of rewriting the Euler–Lagrange equations.

#### 🚀 Detour: The Principle of Least Action 🚀
In the calculus of variations and classical mechanics, the Euler–Lagrange equations are a system of second-order ordinary differential equations whose solutions are stationary points of the given action functional.
> The equations were discovered in the 1750s by Swiss mathematician Leonhard Euler and Italian mathematician Joseph-Louis Lagrange.

In classical mechanics:
```math
L = \underbrace{\frac{1}{2} v^{\top}M(q)v}_{\text{Kinetic Energy}} - \underbrace{U(q)}_{\text{Potential Energy}}
```
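
As a small sketch of how this reads in code (`M_toy` and `U_toy` below are invented placeholders, not the arm's actual mass matrix or potential):

```julia
using LinearAlgebra

# L(q, v) = 1/2 * vᵀ M(q) v - U(q); the mass matrix M and potential U are
# passed in as functions of the configuration q.
lagrangian(q, v; M, U) = 0.5 * dot(v, M(q) * v) - U(q)

# Toy usage with an invented constant mass matrix and a linear potential:
M_toy(q) = [2.0 0.3; 0.3 1.0]
U_toy(q) = 9.81 * sum(q)
lagrangian([0.1, 0.2], [0.0, 0.5]; M = M_toy, U = U_toy)
```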
A curve ($q^\star(t)$) is physically realised iff it is a stationary point of the action functional.
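
For reference (a standard statement, assuming the usual smoothness conditions), the action and the equivalent Euler–Lagrange condition are:

```math
S[q] \;=\; \int_{t_0}^{t_f} L\bigl(q(t), \dot q(t)\bigr)\,dt,
\qquad
\frac{d}{dt}\,\frac{\partial L}{\partial \dot q} \;-\; \frac{\partial L}{\partial q} \;=\; 0 .
```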