
    Background

    In this section, we provide more background on operator learning and its implementation in continuiti.
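    The core idea of operator learning is to learn a mapping between function spaces from data, rather than a mapping between finite-dimensional vectors. As a minimal, self-contained illustration (not the continuiti API — all names here are hypothetical), the sketch below learns the derivative operator on a family of sine functions, represented by their values at fixed "sensor" points, using plain linear least squares:

    ```python
    import numpy as np

    # Minimal operator-learning sketch (assumptions: this is a generic
    # illustration, not continuiti code). We learn the derivative
    # operator d/dx on functions u(x) = sin(a x), each represented by
    # its values at 32 fixed sensor points.
    rng = np.random.default_rng(0)
    x = np.linspace(0.0, np.pi, 32)           # sensor locations
    a = rng.uniform(0.5, 2.0, size=200)       # random frequencies

    U = np.sin(np.outer(a, x))                # inputs:  u(x)  = sin(a x)
    V = a[:, None] * np.cos(np.outer(a, x))   # targets: u'(x) = a cos(a x)

    # Fit a single linear map W so that U @ W approximates V.
    # Differentiation is linear, so a linear model suffices here.
    W, *_ = np.linalg.lstsq(U, V, rcond=None)

    # Evaluate on a function not seen during fitting.
    a_test = 1.3
    u = np.sin(a_test * x)
    v_pred = u @ W
    v_true = a_test * np.cos(a_test * x)
    err = np.max(np.abs(v_pred - v_true))
    ```

    Neural operator architectures such as DeepONet or FNO replace the linear map `W` with a neural network, so that nonlinear operators and varying evaluation points can be handled as well.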

    Architectures

    Neural operator architectures in continuiti


    Last update: 2024-08-20
    Created: 2024-08-20
    Copyright © appliedAI Institute for Europe gGmbH