
Add training lessons and a bigram token-target bridge #5

Merged: hghalebi merged 2 commits into main from rd/add-training-module on Apr 28, 2026
Conversation

@hghalebi (Owner) commented Apr 28, 2026

Summary

  • author the learning module as a real training bridge with loss, gradients, backprop, optimizer, dataset, and token-target coverage
  • add a runnable code/neuron bigram next-token model using token -> embedding -> lm_head -> logits -> cross-entropy
  • extend the course docs to explain why full Transformer training is the point where autograd becomes the right tool
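The token -> embedding -> lm_head -> logits -> cross-entropy path described above can be sketched in a few lines of plain Rust. This is an illustrative toy, not the `code/neuron` crate's actual API; the names `embed`, `lm_head`, and `cross_entropy`, the vocabulary size, and the embedding width are all assumptions made for the sketch.

```rust
// Sketch of the bigram forward path: token id -> embedding row
// -> lm_head matmul -> logits -> cross-entropy against the target token.
// All names and shapes here are illustrative, not the crate's API.

const VOCAB: usize = 4; // toy vocabulary size
const DIM: usize = 3;   // embedding width

/// Look up the embedding row for a token id.
fn embed(table: &[[f64; DIM]; VOCAB], token: usize) -> [f64; DIM] {
    table[token]
}

/// Project an embedding onto vocabulary logits (lm_head is a VOCAB x DIM matrix).
fn lm_head(w: &[[f64; DIM]; VOCAB], x: &[f64; DIM]) -> [f64; VOCAB] {
    let mut logits = [0.0; VOCAB];
    for (row, logit) in w.iter().zip(logits.iter_mut()) {
        *logit = row.iter().zip(x.iter()).map(|(a, b)| a * b).sum();
    }
    logits
}

/// Cross-entropy of the target token under softmax(logits),
/// computed via the numerically stable log-sum-exp trick.
fn cross_entropy(logits: &[f64; VOCAB], target: usize) -> f64 {
    let max = logits.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let log_sum = logits.iter().map(|l| (l - max).exp()).sum::<f64>().ln() + max;
    log_sum - logits[target]
}

fn main() {
    let embedding = [[0.1; DIM]; VOCAB];
    let head = [[0.2; DIM]; VOCAB];
    let (token, next_token) = (1usize, 2usize);

    let x = embed(&embedding, token);
    let logits = lm_head(&head, &x);
    let loss = cross_entropy(&logits, next_token);

    // With identical rows every logit is equal, so the softmax is uniform
    // and the loss equals ln(VOCAB).
    println!("loss = {loss:.4}");
}
```

With identical embedding and head rows the predicted distribution is uniform, so the loss comes out to ln(4), which is a handy sanity check when wiring up a real bigram model.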

Testing

  • python3 scripts/check_course_content.py
  • python3 scripts/check_lesson_rust_snippets.py
  • cargo fmt --manifest-path code/neuron/Cargo.toml --check
  • cargo clippy --manifest-path code/neuron/Cargo.toml --all-targets --all-features
  • cargo test --manifest-path code/neuron/Cargo.toml
  • cargo fmt --manifest-path code/transformer/Cargo.toml --check
  • cargo clippy --manifest-path code/transformer/Cargo.toml --all-targets --all-features
  • cargo test --manifest-path code/transformer/Cargo.toml

@coderabbitai Bot commented Apr 28, 2026

Warning

Rate limit exceeded

@hghalebi has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 57 minutes and 49 seconds before requesting another review.

To keep reviews running without waiting, you can enable the usage-based add-on for your organization, which allows additional reviews beyond the hourly cap. Account admins can enable it under billing.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 8767a36d-6e9f-42b3-abe3-ab51ea5bf1bd

📥 Commits

Reviewing files that changed from the base of the PR and between b348d9f and 175bd8e.

📒 Files selected for processing (28)
  • .github/workflows/ci.yml
  • .gitignore
  • README.md
  • code/README.md
  • code/neuron/Cargo.toml
  • code/neuron/README.md
  • code/neuron/examples/token_targets.rs
  • code/neuron/examples/train_bigram_cycle.rs
  • code/neuron/examples/train_or_gate.rs
  • code/neuron/src/bigram.rs
  • code/neuron/src/dataset.rs
  • code/neuron/src/lib.rs
  • code/neuron/src/neuron.rs
  • code/neuron/src/optimizer.rs
  • code/neuron/src/token_targets.rs
  • lessons/03-neuron/01-rust-essentials-for-a-tiny-neuron.md
  • lessons/03-neuron/02-neuron-as-a-chain-of-functions.md
  • lessons/04-learning/01-one-training-step-end-to-end.md
  • lessons/04-learning/02-backpropagation-as-local-gradient-bookkeeping.md
  • lessons/04-learning/03-datasets-epochs-and-token-targets.md
  • lessons/04-learning/README.md
  • lessons/04-learning/exercises.md
  • lessons/04-learning/solutions.md
  • lessons/07-transformer/README.md
  • lessons/COURSE-STRUCTURE.md
  • lessons/README.md
  • scripts/check_course_content.py
  • scripts/check_lesson_rust_snippets.py


@hghalebi changed the title from "Add neuron training lessons and crate checks" to "Add training lessons and a bigram token-target bridge" on Apr 28, 2026
@hghalebi merged commit 8fb3f09 into main on Apr 28, 2026
5 of 6 checks passed

@gemini-code-assist Bot left a comment


Code Review

This pull request implements Module 04 ('Learning') by adding new lesson files and the neuron Rust crate, which provides a single-neuron training system with manual backpropagation and SGD. Documentation and CI scripts were updated to integrate the new module. Feedback identifies a numbering error in the README, recommends reverting to a stable Rust edition, and suggests refactoring training functions to reduce code duplication.
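What "manual backpropagation and SGD" means for a single neuron can be sketched as follows. This is a hedged illustration under assumed definitions (a linear neuron with a squared-error loss), not the crate's actual `Neuron`, `backward`, or `Sgd` implementation; every name here is hypothetical.

```rust
// Illustrative sketch (not the crate's code): a linear neuron y = w*x + b,
// squared-error loss L = (y - t)^2, hand-derived gradients, and a plain
// gradient-descent update.

struct Neuron {
    w: f64,
    b: f64,
}

impl Neuron {
    fn forward(&self, x: f64) -> f64 {
        self.w * x + self.b
    }

    /// Hand-derived gradients of L = (y - t)^2 with respect to w and b.
    fn backward(&self, x: f64, t: f64) -> (f64, f64) {
        let y = self.forward(x);
        let dl_dy = 2.0 * (y - t); // chain rule starts at the loss
        (dl_dy * x, dl_dy)        // (dL/dw, dL/db)
    }

    /// One SGD step: move each parameter against its gradient.
    fn sgd_step(&mut self, grads: (f64, f64), lr: f64) {
        self.w -= lr * grads.0;
        self.b -= lr * grads.1;
    }
}

fn main() {
    let mut n = Neuron { w: 0.0, b: 0.0 };
    // Fit y = 2x on a tiny dataset of (input, target) pairs.
    for _ in 0..200 {
        for &(x, t) in &[(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)] {
            let grads = n.backward(x, t);
            n.sgd_step(grads, 0.05);
        }
    }
    println!("w = {:.2}, b = {:.2}", n.w, n.b);
}
```

After a couple hundred epochs the parameters settle near w = 2 and b = 0, which is the whole training loop (forward, loss, gradients, update) that the lessons then generalize to datasets, epochs, and token targets.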

Comment thread: code/neuron/Cargo.toml

```toml
[package]
name = "rust_ml_neuron"
version = "0.1.0"
edition = "2024"
```

Severity: high

The Rust 2024 edition is not yet stable and using it may cause compatibility issues with stable Rust toolchains. It's recommended to use the latest stable edition, which is 2021, to ensure the crate can be built by a wider range of users and CI environments.

Suggested change:

```diff
-edition = "2024"
+edition = "2021"
```

Comment thread: README.md

```markdown
3. Continue with [03 Neuron](lessons/03-neuron/README.md).
4. Use [Lessons index](lessons/README.md) to see the full course map and the roadmap modules.
4. Continue with [04 Learning](lessons/04-learning/README.md).
5. Use [Lessons index](lessons/README.md) to see the full course map and the roadmap modules.
```

Severity: medium

The numbered list has a duplicate `4.`. The second occurrence should be `5.` to keep the sequence correct.

Suggested change:

```diff
-4. Use [Lessons index](lessons/README.md) to see the full course map and the roadmap modules.
+5. Use [Lessons index](lessons/README.md) to see the full course map and the roadmap modules.
```

Comment on lines +48 to +83
```rust
/// Trains the neuron for one full pass over the dataset.
pub fn train_epoch(neuron: &mut Neuron, dataset: &Dataset, optimizer: Sgd) -> EpochMetrics {
    for example in dataset.iter() {
        let gradients = neuron.backward(*example);
        optimizer.apply(neuron, gradients);
    }

    EpochMetrics {
        epoch: 1,
        average_loss: average_loss(neuron, dataset),
    }
}

/// Trains the neuron for a fixed number of epochs and records the average loss.
pub fn train_epochs(
    neuron: &mut Neuron,
    dataset: &Dataset,
    optimizer: Sgd,
    epochs: usize,
) -> Vec<EpochMetrics> {
    let mut metrics = Vec::with_capacity(epochs);

    for epoch_index in 0..epochs {
        for example in dataset.iter() {
            let gradients = neuron.backward(*example);
            optimizer.apply(neuron, gradients);
        }

        metrics.push(EpochMetrics {
            epoch: epoch_index + 1,
            average_loss: average_loss(neuron, dataset),
        });
    }

    metrics
}
```

Severity: medium

The functions `train_epoch` and `train_epochs` duplicate the logic for iterating over the dataset once. Extracting that loop into a private helper follows the DRY (Don't Repeat Yourself) principle and improves maintainability.

Additionally, `train_epochs` can be written more concisely using iterators and `map`.

```rust
fn run_one_epoch(neuron: &mut Neuron, dataset: &Dataset, optimizer: Sgd) {
    for example in dataset.iter() {
        let gradients = neuron.backward(*example);
        optimizer.apply(neuron, gradients);
    }
}

/// Trains the neuron for one full pass over the dataset.
pub fn train_epoch(neuron: &mut Neuron, dataset: &Dataset, optimizer: Sgd) -> EpochMetrics {
    run_one_epoch(neuron, dataset, optimizer);
    EpochMetrics {
        epoch: 1,
        average_loss: average_loss(neuron, dataset),
    }
}

/// Trains the neuron for a fixed number of epochs and records the average loss.
pub fn train_epochs(
    neuron: &mut Neuron,
    dataset: &Dataset,
    optimizer: Sgd,
    epochs: usize,
) -> Vec<EpochMetrics> {
    (1..=epochs)
        .map(|epoch| {
            run_one_epoch(neuron, dataset, optimizer);
            EpochMetrics {
                epoch,
                average_loss: average_loss(neuron, dataset),
            }
        })
        .collect()
}
```
