Full language support for Scatter - a distributed computing language with automatic scope-based execution, privacy annotations, multi-tier deployment, and comprehensive ML/IO libraries.
Full syntax highlighting for all Scatter language constructs:
- Keywords: `func`, `type`, `struct`, `const`, `mut`, `distribute`, `capabilities`, `if`, `else`, `for`, `while`, `when`, `defer`, `return`, `match`, `import`, `scopegroup`, etc.
- Built-in Types: `int`, `float`, `string`, `bool`, `byte`, `Owned`, `Option`, `Result`, `Map`, `Pair`
- ML Types: `Tensor`, `Shape`, `DType`, `GGMLTensor`, `GGUFModel`, `DistributedLLM`, `TensorShard`
- IO Types: `File`, `FileInfo`, `DirEntry`, `FileResult`
- Annotations: `@scopes`, `@scope`, `@privacy`, `@parallel`, `@owned`, `@replicated`, `@requirements`, `@gpu`, `@capability`, `@bridge`, etc.
- Operators: Standard operators plus `++`, `--`, `^`, `|>` (stream pipe)
- Number Formats: Decimal, hex (`0x`), binary (`0b`), octal (`0o`)
- Comments: Line (`#`) and block (`/* */`)
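For example, a small illustrative snippet exercising the constructs above (the function itself is made up for demonstration):

```scatter
# Line comment
/* Block comment */
@scopes:edge
@parallel
func tally(readings []float) float {
  mask := 0xFF       # hex literal
  flags := 0b1010    # binary literal
  perms := 0o755     # octal literal
  mut total := 0.0
  for r in readings {
    total = total + r
  }
  return total
}
```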
Quickly scaffold common patterns with 30+ snippets:
| Prefix | Description |
|---|---|
| `main` | Main function |
| `func` | Function declaration |
| `method` | Method with receiver |
| `struct` | Struct type definition |
| `const` | Constant declaration |
| `mut` | Mutable variable declaration |
| `defer` | Defer statement |
| `when` | Conditional expression |
| `typeg` | Generic struct type |
| `funcg` | Generic function |
| `if`, `ife` | If and if-else statements |
| `forr`, `fori` | For loops (range and iteration) |
| `while` | While loop |
| `match` | Match statement |
| `scopegroup` | Scope group definition |
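As a rough illustration, the `func` snippet expands to a skeleton along these lines (placeholder names are hypothetical; the shipped expansion may differ):

```scatter
func name(param int) int {
  return 0
}
```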
| Prefix | Description |
|---|---|
| `@scopes` | Scope annotation |
| `@privacy` | Privacy annotation |
| `@parallel` | Parallel annotation |
| `@owned` | Owned annotation |
| `@replicated` | Replicated annotation |
| `@test` | Test annotation |
| `@capability` | Capability requirement annotation |
| `@bridge` | Native code bridge annotation |
| Prefix | Description |
|---|---|
| `mltensor` | Create ML tensors (zeros, ones, tensor) |
| `mlmatmul` | Matrix multiplication example |
| `mldistributed` | Distributed LLM inference |
| `mlshard` | Model layer sharding |
| `mlggml` | GGML context and operations |
| `mlcache` | Model caching with IO |
| Prefix | Description |
|---|---|
| `iofile` | Basic file read/write operations |
| `iotemp` | Temporary file operations |
| `iodir` | Directory operations |
| `iostream` | File streaming (large files) |
| `iomodel` | Safe model loading with checks |
| `iopath` | Path manipulation operations |
| Prefix | Description |
|---|---|
| `test` | Test function |
| `parfunc` | Parallel function template |
| `service` | Distributed service template |
Comprehensive autocompletion for:
- Core Language: All keywords, built-in functions, and types
- ML Library (60+ functions):
  - Tensor operations: `tensor()`, `zeros()`, `matmul()`, `relu()`, `softmax()`
  - Distributed ops: `shardTensor()`, `distributedMatmul()`, `allReduceSum()`
  - GGML integration: `ggmlInit()`, `loadGGUF()`, `ggmlQuantize()`
  - LLM inference: `loadDistributedLLM()`, `generate()`, `tokenize()`
- IO Library (35+ functions):
  - File ops: `readFile()`, `writeFile()`, `exists()`, `deleteFile()`
  - Temp files: `createTempFile()`, `getTempDir()`, `cleanTempFiles()`
  - Paths: `joinPath()`, `basename()`, `dirname()`, `extension()`
  - Directories: `listDir()`, `createDirAll()`, `removeDirAll()`
  - Streaming: `openRead()`, `readLine()`, `write()`, `close()`
Hover over any function, type, or variable to see:
- Function signatures with parameter types
- Type definitions with field information
- Scope requirements and accessibility
- Privacy and ownership annotations
- Detailed documentation
Jump to the definition of:
- Functions and methods
- Types and structs
- Variables and constants
- Scope groups
Real-time parameter hints while typing function calls:
- Shows function signature as you type `(`
- Highlights current parameter as you type
- Updates on `,` to show next parameter
- Works with all built-in, ML, and IO functions
Example:

```scatter
matmul(|)   # Shows: func matmul(a Tensor, b Tensor) Tensor
```
Smart suggestions and automatic fixes:
Quick Fixes:
- `Add @scopes annotation` - Add missing scope constraints
- `Wrap with own()` - Wrap return values for @owned types
- `Add defer close()` - Auto-close file handles
- `Import ml.tensor` - Auto-import ML modules
- `Import io.files` - Auto-import IO modules
Refactorings:
- `Convert to @parallel function` - Make function parallel
- `Add distributed annotations` - Convert to distributed execution
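For instance, the `Add defer close()` quick fix turns an unclosed file handle into the deferred-close pattern used throughout the IO library (a before/after sketch):

```scatter
# Before: file handle is never closed
file := openRead("data.txt")

# After applying "Add defer close()"
file := openRead("data.txt")
defer close(file)
```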
Automatic code formatting (Format Document or on save):
- Consistent indentation (2 spaces or tabs)
- Proper bracket alignment
- Standard code style
- Preserves comments
Enable auto-format on save in VSCode settings:
"[scatter]": {
"editor.formatOnSave": true
}Inline type annotations and parameter names displayed in your code:
Type Hints:
```scatter
result := matmul(a, b)           # Shows: result: Tensor
model := loadGGUF("path.gguf")   # Shows: model: GGUFModel
file := openRead("data.txt")     # Shows: file: File
count := len(array)              # Shows: count: int
```
Parameter Hints:
```scatter
tensor(data: [1.0, 2.0], dims: [2, 1])   # Parameter names shown inline
matmul(a: tensor1, b: tensor2)
generate(llm: model, prompt: "Hello")
```
Enable/disable in VSCode settings:
"editor.inlayHints.enabled": "on" // or "off", "onUnlessPressed"Advanced syntax coloring based on semantic analysis:
- Functions - Different colors for declarations vs calls
- Types - Distinguish built-in vs user-defined types
- ML/IO Types - Special highlighting for library types (Tensor, File, etc.)
- Annotations - Highlight scope and privacy annotations
- Variables - Different colors for parameters, locals, and globals
- Scope-based - Visual indication of scope violations
Find every usage of a symbol across your workspace:
- Right-click on any function, type, or variable
- Select "Find All References" (Shift+F12)
- See all locations where the symbol is used
- Jump to any reference with a single click
Example use cases:
- Find all calls to a function
- See where a type is used
- Track variable usage across files
Rename variables, functions, or types everywhere at once:
- Right-click on a symbol
- Select "Rename Symbol" (F2)
- Type the new name
- All occurrences are updated automatically
Safe refactoring:
- Updates all references
- Preserves code structure
- Works across the entire file
- Maintains comments
Visual reference counts displayed above functions and types:
- Shows "X references" above each function declaration
- Shows "X references" above each type declaration
- Click to open references panel
- Updates in real-time as you code
Example:
```scatter
# 3 references
func processData(input Tensor) Tensor {
  return matmul(input, weights)
}
```
Benefits:
- Instantly see which functions are heavily used
- Identify unused code
- Navigate to all usages with one click
- Understand code impact at a glance
Interactive tree view of function calls:
- Incoming Calls - See who calls this function
- Outgoing Calls - See what this function calls
- Navigate through the call graph
- Understand code flow and dependencies
How to use:
- Right-click on a function name
- Select "Show Call Hierarchy"
- Explore incoming and outgoing calls
- Click to navigate to any call site
Perfect for:
- Understanding complex codebases
- Refactoring with confidence
- Tracing execution paths
- Documenting architecture
Real-time validation of Scatter language constructs:
Validates scope annotations and distributed execution:
- Checks `@scopes` annotations are valid (cloud, edge, device, local)
- Recognizes custom `scopegroup` declarations
- Warns if `@parallel` functions lack `@scopes`
- Detects cross-scope violations
Example:
```scatter
@scopes:mobile   # ⚠️ Warning: Unknown scope 'mobile'
func process() {}

@parallel        # ⚠️ Warning: @parallel should specify @scopes
func distribute() {}
```
Prevents accidental data leaks:
- Detects `print()` of `@privacy:secret` data (ERROR)
- Warns about returning sensitive data without `own()`
- Catches cross-scope data access
- Enforces privacy best practices
Example:
```scatter
@privacy:secret
userData := loadUser()
print(userData)   # ❌ ERROR: Printing @privacy:secret variable

func getUser() UserData {
  @privacy:secret
  user := loadUser()
  return user     # ⚠️ WARNING: Return without own() wrapper
}
```
Security levels:
- `@privacy:secret` - Highly sensitive (passwords, keys)
- `@privacy:confidential` - Confidential user data
- `@privacy:public` - Public data (no restrictions)
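For the return-without-`own()` warning above, the fix wraps the sensitive value before it leaves the function. A minimal sketch, assuming the wrapped value is returned as `Owned` per the built-in signature `own(data Owned) Owned`:

```scatter
func getUser() Owned {
  @privacy:secret
  user := loadUser()
  return own(user)   # ownership wrapper satisfies the privacy checker
}
```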
- Automatic bracket matching and closing
- Comment toggling with `#` (line) and `/* */` (block)
- Smart indentation
- Code folding for functions, types, and blocks
```scatter
# Define execution tiers
scopegroup backend {
  cloud
  edge
}

# Private user data stays on device
@scopes:device
@privacy:secret
type UserCredentials struct {
  userId string
  token string
}

# Public data can be distributed
@scopes:backend
type SensorReading struct {
  sensorId string
  value float
  timestamp float
}

# Parallel processing across workers
@scopes:worker
@parallel
func processReadings(readings []SensorReading) float {
  mut sum := 0.0
  for reading in readings {
    sum = sum + reading.value
  }
  return sum / toFloat(len(readings))
}

func main() {
  readings := []SensorReading{
    SensorReading{sensorId: "s1", value: 23.5, timestamp: 1000.0},
    SensorReading{sensorId: "s2", value: 24.1, timestamp: 1001.0}
  }
  avg := processReadings(readings)
  print(avg)
}
```
```scatter
import ml.ggml
import ml.llm
import io.files

@scopes:edge
@requirements:GPU,16GB_RAM
@parallel
func runDistributedInference(shardId int, totalShards int) {
  # Load model shard
  config := LLMConfig{
    modelPath: "models/mistral-7b.gguf",
    maxTokens: 2048,
    temperature: 0.7
  }
  llm := loadDistributedLLM(config, shardId, totalShards)
  print("Node", shardId, "ready with", llm.layers, "layers")

  # Generate text
  prompt := "Explain distributed computing in simple terms:"
  result := generate(llm, prompt)
  print("Generated:")
  print(result.text)
  print("Tokens:", result.tokensGenerated)
  print("Time:", result.timeMs, "ms")
}

func main() {
  # Distributed across 4 edge nodes
  for i := 0; i < 4; i++ {
    runDistributedInference(i, 4)
  }
}
```
```scatter
import ml.tensor

func trainNeuralNet() {
  # Create input and weight tensors
  input := tensor([1.0, 2.0, 3.0, 4.0], [1, 4])   # 1x4 row vector
  weights := randn([4, 3])

  # Forward pass: (1x4) x (4x3) -> (1x3)
  hidden := matmul(input, weights)
  activated := relu(hidden)
  output := softmax(activated)
  print("Output probabilities:", output)
}
```
```scatter
import io.files
import ml.ggml

func loadModelSafe(modelName string) GGUFModel {
  # Check model cache
  cacheDir := joinPath(getTempDir(), "scatter_models")
  modelPath := joinPath(cacheDir, modelName + ".gguf")
  if !exists(modelPath) {
    print("Model not found:", modelPath)
    return GGUFModel{}
  }

  # Verify size
  size := fileSize(modelPath)
  print("Loading model:", size / 1024 / 1024, "MB")

  # Load with GGML
  model := loadGGUF(modelPath)
  print("Model loaded:", model.config.numLayers, "layers")
  return model
}

func downloadAndCache(url string, modelName string) string {
  cacheDir := joinPath(getTempDir(), "scatter_models")
  createDirAll(cacheDir)

  # Download to temp location first
  tempPath := createTempFileExt("download", ".gguf")
  # downloadFile(url, tempPath)  # Would use HTTP client

  # Move to cache atomically
  cachedPath := joinPath(cacheDir, modelName + ".gguf")
  moveFile(tempPath, cachedPath)
  return cachedPath
}
```
```scatter
import io.files

func processLargeLogFile(logPath string) {
  # Open for streaming
  file := openRead(logPath)
  defer close(file)

  mut errorCount := 0
  mut lineNum := 0

  # Process line by line
  while true {
    line := readLine(file)
    if len(line) == 0 {
      break  # EOF
    }
    lineNum = lineNum + 1
    if contains(line, "ERROR") {
      errorCount = errorCount + 1
      print("Error at line", lineNum, ":", line)
    }
  }
  print("Total errors:", errorCount)
}
```
```scatter
import ml.tensor
import ml.distributed
import ml.llm
import io.files

@scopes:edge
@parallel
func distributedInferencePipeline(
  shardId int,
  totalShards int,
  inputFile string,
  outputDir string
) {
  # Load model shard
  config := LLMConfig{
    modelPath: "models/mistral-7b.gguf",
    maxTokens: 512
  }
  llm := loadDistributedLLM(config, shardId, totalShards)

  # Create output directory
  shardDir := joinPath(outputDir, "shard_" + toString(shardId))
  createDirAll(shardDir)

  # Process input file
  content := readFile(inputFile)
  lines := split(content, "\n")

  # Process each line
  for i := 0; i < len(lines); i++ {
    if i % totalShards != shardId {
      continue  # Skip lines not for this shard
    }
    prompt := lines[i]
    result := generate(llm, prompt)

    # Save result
    outputFile := joinPath(shardDir, "output_" + toString(i) + ".txt")
    writeFile(outputFile, result.text)
  }
  print("Shard", shardId, "completed")
}

func main() {
  # Run distributed pipeline across 4 nodes
  for i := 0; i < 4; i++ {
    distributedInferencePipeline(i, 4, "inputs.txt", "results/")
  }
}
```
1. Download the `.vsix` file
2. Open VSCode
3. Go to Extensions (Ctrl+Shift+X)
4. Click the `...` menu and select "Install from VSIX..."
5. Select the downloaded file
```bash
cd vscode-scatter
npm install
npm run compile
npm run package
```

Then install the generated `.vsix` file.
Keywords:

```scatter
# Core
func, type, struct, const, mut, import, scopegroup, distribute, capabilities
# Control Flow
if, else, for, while, when, match, case, default, in
# Flow Control
return, break, continue, defer
# Literals
nil, true, false
```
Types:

```scatter
# Primitive
int, float, string, bool, byte
# Complex
Owned[T] # Owned/secured data
Option[T] # Optional values (some/none)
Result[T,E] # Result type (ok/err)
Map[K,V] # Key-value map
Pair[T,U] # Tuple pair
[]T # Arrays
?T          # Optional types
```
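A brief sketch of the complex types in a declaration (the field names are hypothetical; the struct syntax matches the examples later in this README):

```scatter
type Config struct {
  name string
  tags []string                # array type
  limit ?int                   # optional type
  settings Map[string, float]  # key-value map
}
```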
ML and IO library types:

```scatter
# ml.tensor
Tensor # N-dimensional tensor
Shape # Tensor shape
DType # Data type (Float32, Int8, etc.)
QTensor # Quantized tensor
# ml.distributed
TensorShard # Distributed tensor shard
# ml.ggml
GGMLTensor # Native GGML tensor
GGMLContext # GGML computation context
GGUFModel # GGUF model (llama.cpp format)
# ml.llm
DistributedLLM # Distributed LLM instance
LLMConfig # LLM configuration
GenerationResult # Generation output
# io.files
File # File handle for streaming
FileInfo # File metadata (size, timestamps)
DirEntry # Directory entry
FileResult # Operation result with error
```
| Annotation | Description |
|---|---|
| `@scopes:tier` | Execution tier constraints (cloud, edge, device, etc.) |
| `@scope(tier)` | Execution tier (parenthesis style) |
| `@privacy:level` | Data privacy level (secret, confidential, public) |
| `@owned` | Cryptographically signed data with ownership |
| `@parallel` | Distributed parallel execution across nodes |
| `@size:large` | Large data hint for local processing |
| `@replicated:crdt` | CRDT replication strategy (gcounter, orset, etc.) |
| `@requirements:res` | Resource requirements (GPU, memory, storage) |
| `@capability:cap` | Node capability requirement (database, redis, gpu) |
| `@bridge("name")` | Bridge to native code function |
| `@prefer:tier` | Soft placement preference |
| `@require:tier` | Hard placement requirement |
| `@near:data` | Data locality hint for optimization |
| `@lowLatency` | Optimize for low latency |
| `@lowPower` | Optimize for low power consumption |
| `@gpu` | Require GPU execution |
| `@test` | Mark function as test |
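A hedged sketch combining a few annotations from the table (the scope, capability, and bridge names are invented for illustration; whether `@bridge` declarations omit a Scatter body is an assumption):

```scatter
@scopes:cloud
@capability:database
@lowLatency
func queryUsers(filter string) []string {
  return []string{}
}

# Assumed: a bridged declaration binds to a native implementation
@bridge("native_hash")
func hashBytes(data []int) int
```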
Built-in functions:

```scatter
# Output
print(...args)
# Arrays
len(arr) int
append(arr, ...elements) array
# Math
sqrt(x float) float
abs(x) number
# Type Conversion
toString(x) string
toFloat(x) float
toInt(x) int
# Ownership
own(data Owned) Owned
wrap(data, key) WrappedData
unwrap(data, key) data
canAccess(scope string) bool
delegate(token, scope) Token
unwrapWithToken(data, token) data
```
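A sketch of the ownership helpers, with semantics inferred from the signatures above (`secretData` and `key` are illustrative placeholders):

```scatter
wrapped := wrap(secretData, key)   # seal data with a key
if canAccess("device") {           # check the current execution scope
  plain := unwrap(wrapped, key)    # recover the original value
  print(plain)
}
```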
```scatter
# ml.tensor

# Creation
tensor(data []float, dims []int) Tensor
zeros(dims []int) Tensor
ones(dims []int) Tensor
randn(dims []int) Tensor
# Operations
matmul(a Tensor, b Tensor) Tensor
add(a Tensor, b Tensor) Tensor
mul(a Tensor, b Tensor) Tensor
# Activations
relu(t Tensor) Tensor
sigmoid(t Tensor) Tensor
tanh(t Tensor) Tensor
softmax(t Tensor) Tensor
# Shape operations
reshape(t Tensor, newDims []int) Tensor
transpose(t Tensor) Tensor
sum(t Tensor, axis int) Tensor
mean(t Tensor, axis int) Tensor
# Quantization
quantize(t Tensor, bits int) QTensor
dequantize(qt QTensor) Tensor
```
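A short sketch of the shape and quantization helpers (dimensions and bit width chosen arbitrarily):

```scatter
t := randn([2, 8])
flat := reshape(t, [16])   # 2x8 -> 16
tt := transpose(t)         # 2x8 -> 8x2
q := quantize(t, 8)        # 8-bit quantized tensor
back := dequantize(q)      # back to float
m := mean(t, 1)            # reduce along axis 1
```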
```scatter
# ml.distributed

# Sharding
shardTensor(t Tensor, numShards int, axis int) []TensorShard
gatherShards(shards []TensorShard) Tensor
# Distributed computation
@parallel
distributedMatmul(aShard TensorShard, b Tensor) TensorShard
allReduceSum(shard TensorShard) TensorShard
allReduceMean(shard TensorShard) TensorShard
broadcast(t Tensor, rootNode int)
scatter(t Tensor, numNodes int) []TensorShard
# Model sharding
shardModelLayers(numLayers int, numNodes int) []LayerShard
pipelineForward(input Tensor, layers []LayerShard) Tensor
```
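A sketch of sharded computation against these signatures (the reduction semantics are inferred from the function names):

```scatter
a := randn([1024, 1024])
b := randn([1024, 1024])
shards := shardTensor(a, 4, 0)               # split a row-wise into 4 shards
partial := distributedMatmul(shards[0], b)   # this node's partial product
summed := allReduceSum(partial)              # combine partials across nodes
full := gatherShards(shards)                 # reassemble the original tensor
```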
```scatter
# ml.ggml

# Context management
ggmlInit(memSize int) GGMLContext
ggmlFree(ctx GGMLContext)
# Model loading
loadGGUF(path string) GGUFModel
# Operations
ggmlTensor(ctx GGMLContext, dims []int) GGMLTensor
ggmlMatmul(ctx GGMLContext, a GGMLTensor, b GGMLTensor) GGMLTensor
ggmlRope(ctx GGMLContext, t GGMLTensor, pos int) GGMLTensor
ggmlRMSNorm(ctx GGMLContext, t GGMLTensor) GGMLTensor
ggmlQuantize(ctx GGMLContext, t GGMLTensor, type GGMLQuantType) GGMLTensor
ggmlCompute(ctx GGMLContext, graph GGMLGraph)
```
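A sketch of the GGML context lifecycle (memory size and dimensions are arbitrary):

```scatter
ctx := ggmlInit(512 * 1024 * 1024)   # 512 MB context
defer ggmlFree(ctx)

a := ggmlTensor(ctx, [64, 64])
b := ggmlTensor(ctx, [64, 64])
c := ggmlMatmul(ctx, a, b)
n := ggmlRMSNorm(ctx, c)
```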
```scatter
# ml.llm

# Model loading
@scopes:edge
loadDistributedLLM(config LLMConfig, shardId int, totalShards int) DistributedLLM
# Generation
@scopes:edge
generate(llm DistributedLLM, prompt string) GenerationResult
# Tokenization
tokenize(llm DistributedLLM, text string) []int
detokenize(llm DistributedLLM, tokens []int) string
# Forward pass
computeEmbeddings(llm DistributedLLM, tokens []int) Tensor
computeAttention(q Tensor, k Tensor, v Tensor) Tensor
computeFFN(x Tensor, w1 Tensor, w2 Tensor) Tensor
# Sampling
sampleToken(logits Tensor, temperature float, topP float) int
updateKVCache(cache KVCache, k Tensor, v Tensor, pos int) KVCache
```
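A sketch of a tokenize/detokenize round trip (`llm` comes from `loadDistributedLLM`, as in the full examples above):

```scatter
tokens := tokenize(llm, "Hello, world")
print("Token count:", len(tokens))
text := detokenize(llm, tokens)
print(text)   # "Hello, world"
```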
```scatter
# io.files

# Basic I/O
readFile(path string) string
readBytes(path string) []int
writeFile(path string, content string) bool
writeBytes(path string, data []int) bool
appendFile(path string, content string) bool
# File management
exists(path string) bool
deleteFile(path string) bool
copyFile(src string, dest string) bool
moveFile(src string, dest string) bool
# Metadata
fileSize(path string) int
stat(path string) FileInfo
isDir(path string) bool
isFile(path string) bool
modTime(path string) int

# Temp files
getTempDir() string
createTempFile(prefix string) string
createTempFileExt(prefix string, ext string) string
createTempDir(prefix string) string
cleanTempFiles(pattern string, maxAge int) int

# Paths
joinPath(parts ...string) string
basename(path string) string
dirname(path string) string
extension(path string) string
stem(path string) string
absolutePath(path string) string
normalizePath(path string) string

# Directories
listDir(path string) []DirEntry
listDirRecursive(path string) []DirEntry
createDir(path string) bool
createDirAll(path string) bool
removeDir(path string) bool
removeDirAll(path string) bool
glob(pattern string) []string
walk(root string, fn func(DirEntry))
# Open files
openRead(path string) File
openWrite(path string) File
openAppend(path string) File
# Read/Write
readLine(file File) string
read(file File, n int) string
write(file File, content string) bool
close(file File)
```
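A matching write-side sketch for the streaming API (pairs with the read-side log example earlier):

```scatter
file := openWrite("out.txt")
defer close(file)
write(file, "first line\n")
write(file, "second line\n")
```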
Built-in test runner for executing @test annotated functions with visual feedback and results.
- Test Discovery: Automatically finds all `@test` functions
- Code Lens Actions:
  - `▶ Run All Tests (N)` - Execute all tests in file
  - `▶ Run Test` - Run individual test
  - `🐛 Debug Test` - Debug individual test
- Real-time Output: Dedicated "Scatter Tests" output channel
- Visual Feedback: Status bar with pass/fail indicators
- Test Results: Duration tracking, pass/fail status, error messages
```scatter
import ml.tensor

# Mark test functions with @test annotation
@test
func testTensorCreation() {
  t := tensor([1.0, 2.0, 3.0, 4.0], [2, 2])
  assert(len(t.shape.dims) == 2, "Tensor should be 2D")
  assert(t.shape.dims[0] == 2, "First dimension should be 2")
  print("✓ testTensorCreation passed")
}

@test
@scopes:edge
@parallel
func testDistributedComputation() {
  data := zeros([1000, 1000])
  result := matmul(data, data)
  assert(result.shape.dims[0] == 1000, "Shape should match")
  print("✓ testDistributedComputation passed")
}

# Helper function for assertions
func assert(condition bool, message string) {
  if !condition {
    print("Assertion failed:", message)
  }
}
```
Running Tests:
- Click `▶ Run All Tests (N)` at the top of the file
- Or click `▶ Run Test` above individual test functions
- View results in the "Scatter Tests" output channel
Test Output Example:
```
Running 8 tests...

▶ Running: testTensorCreation
✓ testTensorCreation (234ms)
▶ Running: testDistributedComputation
✓ testDistributedComputation (456ms)

────────────────────────────────────────────────────────────
Results: 8 passed, 0 failed
────────────────────────────────────────────────────────────
```
Comprehensive debugging support with pre-configured launch configurations.
Debug Scatter Program:
- Debug the currently open Scatter file
- Set breakpoints, inspect variables, step through code
Debug Scatter Test:
- Debug test functions with test-specific configuration
- Inspect test data and assertion failures
Attach to Scatter Process:
- Attach debugger to a running Scatter process
- Debug already-running applications
- Breakpoints: Set, disable, and manage breakpoints
- Step Execution: Step over, step into, step out
- Variable Inspection: View local and global variables
- Watch Expressions: Monitor specific values
- Call Stack: Navigate function call hierarchy
Launch Configurations (`.vscode/launch.json`):
```json
{
  "configurations": [
    {
      "name": "Debug Scatter Program",
      "type": "scatter",
      "request": "launch",
      "program": "${file}",
      "stopOnEntry": false
    },
    {
      "name": "Debug Scatter Test",
      "type": "scatter",
      "request": "launch",
      "program": "${file}",
      "testMode": true
    }
  ]
}
```

Contributions are welcome! Please feel free to submit issues and pull requests.
MIT License