
add addressRange for mmu accesFault check #159

Closed
colle-chaude wants to merge 2 commits into SpinalHDL:main from colle-chaude:contribution

Conversation

@colle-chaude

Issue : https://github.com/litex-hub/pythondata-cpu-naxriscv/issues/7

The MMU needs to check whether an address corresponds to an existing endpoint.
!(memRange(TRANSLATED) || IO) doesn't cover the whole range.

Replace memRange with addressRange, which covers all valid addresses.

@Dolu1990
Member

Dolu1990 commented Dec 4, 2025

Hi,

Ahhhh, now I understand better:
ioRange : UInt => Bool = _(31 downto 28) === 0x1,
memRange : UInt => Bool = _(31),
fetchRange : UInt => Bool = _(31 downto 28) =/= 0x1,

memRange => all the non io memory range
ioRange => all the io memory range
fetchRange => all the executable memory range, !! has to be contained in memRange and ioRange !!

That was the initial idea. So fetchRange isn't defining a new region of memory; it is a "filter" on the ones already existing.
I'm not sure the PR is necessary, right?
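The intent described above can be sketched with plain Scala predicates (a hardware-free model on Long addresses; the hex windows are the defaults quoted above, not litex's actual mapping):

```scala
object DefaultRanges extends App {
  // Default windows quoted above, modeled on 32-bit addresses held in a Long.
  def ioRange(a: Long): Boolean    = ((a >>> 28) & 0xF) == 0x1 // 0x10000000..0x1FFFFFFF
  def memRange(a: Long): Boolean   = ((a >>> 31) & 0x1) == 1   // bit 31 set: upper 2 GiB
  def fetchRange(a: Long): Boolean = ((a >>> 28) & 0xF) != 0x1 // a filter, not a new region

  val dram = 0x80000000L
  assert(fetchRange(dram) && memRange(dram)) // executable and backed by memory

  val rom = 0x00000000L
  assert(fetchRange(rom) && !(memRange(rom) || ioRange(rom))) // passes the filter, hits nothing
}
```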

Or can you tell me more about the setup when things broke on your tests ?

@colle-chaude
Author

colle-chaude commented Dec 5, 2025

(This also continues the discussion in litex-hub/pythondata-cpu-naxriscv#7)

Here are the memory regions given by LiteX:

logs from litex:

IO Regions: (1)
io0                 : Origin: 0x80000000, Size: 0x80000000, Mode:  RW, Cached: False, Linker: False
Bus Regions: (7)
rom                 : Origin: 0x00000000, Size: 0x00020000, Mode:  RX, Cached:  True, Linker: False
sram                : Origin: 0x10000000, Size: 0x00002000, Mode: RWX, Cached:  True, Linker: False
main_ram            : Origin: 0x40000000, Size: 0x08000000, Mode: RWX, Cached:  True, Linker: False
opensbi             : Origin: 0x40f00000, Size: 0x00080000, Mode:  RW, Cached:  True, Linker:  True
csr                 : Origin: 0xf0000000, Size: 0x00010000, Mode:  RW, Cached: False, Linker: False
clint               : Origin: 0xf0010000, Size: 0x00010000, Mode:  RW, Cached: False, Linker:  True
plic                : Origin: 0xf0c00000, Size: 0x00400000, Mode:  RW, Cached: False, Linker:  True

then, from the nax execution:

[info] memoryRegions: Seq[naxriscv.platform.litex.LitexMemoryRegion] = ArrayBuffer(LitexMemoryRegion(SM(0x80000000, 0x80000000),io,p), LitexMemoryRegion(SM(0x0, 0x20000),rxc,p), LitexMemoryRegion(SM(0x10000000, 0x2000),rwxc,p), LitexMemoryRegion(SM(0x40000000, 0x8000000),rwxc,m), LitexMemoryRegion(SM(0xf0000000, 0x10000),rw,p))
[info] LitexMemoryRegion(SM(0x80000000, 0x80000000),io,p)
[info] LitexMemoryRegion(SM(0x0, 0x20000),rxc,p)
[info] LitexMemoryRegion(SM(0x10000000, 0x2000),rwxc,p)
[info] LitexMemoryRegion(SM(0x40000000, 0x8000000),rwxc,m)
[info] LitexMemoryRegion(SM(0xf0000000, 0x10000),rw,p)

This is interpreted by LitexMemoryRegion in src/main/scala/naxriscv/platform/litex/NaxGen.scala :

case class LitexMemoryRegion(mapping : SizeMapping, mode : String, bus : String){
  def isIo = mode.contains("i") || mode.contains("o")
  def isExecutable = mode.contains("x")
  def isCachable = mode.contains("c")
  def onPeripheral = bus match {
    case "m" => false
    case "p" => true
  }
  def onMemory = !onPeripheral
}
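For what it's worth, this decoding can be exercised standalone; the SizeMapping below is a hypothetical stand-in (base + size, address hit) so the snippet runs without SpinalHDL:

```scala
// Hypothetical stand-in for spinal.lib's SizeMapping, just enough to run here.
case class SizeMapping(base: Long, size: Long) {
  def hit(address: Long): Boolean = address >= base && address < base + size
}

case class LitexMemoryRegion(mapping: SizeMapping, mode: String, bus: String) {
  def isIo         = mode.contains("i") || mode.contains("o")
  def isExecutable = mode.contains("x")
  def isCachable   = mode.contains("c")
  def onPeripheral = bus match {
    case "m" => false
    case "p" => true
  }
  def onMemory = !onPeripheral
}

object DecodeDemo extends App {
  // sram from the logs: "rwxc" on the "p" bus
  val sram = LitexMemoryRegion(SizeMapping(0x10000000L, 0x2000), "rwxc", "p")
  assert(!sram.isIo && sram.isExecutable && sram.isCachable && sram.onPeripheral)
}
```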

Based on that, I understood that from litex's point of view:

  • peripheral stands for everything that is not DRAM
  • memory stands for DRAM
  • IO (different from peripheral) is for signals exiting the chip, like uart/spi...
  • c, cachable, is for memory (rom/ram/flash...)

This is not exactly the same as what you intend, I think.

My interpretation is that litex and nax don't give the same definition of IO and Peripheral, am I right?

So to make Nax compatible with LiteX, either apply my PR, or change LitexMemoryRegion to interpret as onMemory not only RAM but everything that is not IO.

This is related to my question in litex-hub/pythondata-cpu-naxriscv#7:

Other than that, I am confused about io/peripheral/...

If I understand correctly, an address can be IO or not, memory or peripheral, cachable or not, at least with Litex.
In the LSU, a load bypasses the cache if IS_IO (if the address is tagged as IO); otherwise it asks the cache. Would it be more accurate to make this decision on the cachable tag?

I think there is the same kind of issue with isCachable/IO.

I'm not sure if I am being clear, tell me

@colle-chaude
Author

To be more specific:

The slow bus is called the peripheral bus; litex's IO tag means something else. But the ioRange used in the MMU is based on litex's IO tag (def isIo = mode.contains("i") || mode.contains("o")), which misses peripheral addresses that are not IOs.

@Dolu1990
Member

Dolu1990 commented Dec 9, 2025

Hi ^^
Ahh, I think I remember the trick.

LitexMemoryRegion(SM(0x80000000, 0x80000000),io,p) and
LitexMemoryRegion(SM(0xf0000000, 0x10000),rw,p)

overlap, and this is cursed.
So, the SoC just rejects "p" && "io" regions and expects that litex will anyway specify every true IO peripheral individually:
[info] LitexMemoryRegion(SM(0x0, 0x20000),rxc,p)
[info] LitexMemoryRegion(SM(0x10000000, 0x2000),rwxc,p)
[info] LitexMemoryRegion(SM(0xf0000000, 0x10000),rw,p)

So yes, there's kind of a mismatch in the way Nax and litex handle LitexMemoryRegion.

I don't know what the litex LitexMemoryRegion(SM(0x80000000, 0x80000000),io,p) is really about.

So to make Nax compatible with LiteX, either apply my PR, or change LitexMemoryRegion to interpret as onMemory not only RAM but everything that is not IO.

Can you tell me about it? What LitexMemoryRegion is specified by litex for it?

@colle-chaude
Author

Okay, as far as I know, everything is defined in litex/soc/cores/cpu/naxriscv/core.py in https://github.com/enjoy-digital/litex

The IO region range is defined:
io_regions = {0x8000_0000: 0x8000_0000} # Origin, Length.

Then the memory mapping

# Memory Mapping.
  @property
  def mem_map(self): # TODO
      return {
          "rom":      0x0000_0000,
          "sram":     0x1000_0000,
          "main_ram": 0x4000_0000,
          "csr":      0xf000_0000,
          "clint":    0xf001_0000,
          "plic":     0xf0c0_0000,
      }

Then, in litex/soc/integration/soc.py, io_regions and mem_map are processed and added to the memory regions with self.bus.add_region.

Then, back in litex/soc/cores/cpu/naxriscv/core.py, all memory regions are compiled to generate the NaxRiscv.memory_regions list:

        for name, region in self.soc_bus.io_regions.items():
            NaxRiscv.memory_regions.append( (region.origin, region.size, "io", "p") ) # IO is only allowed on the p bus
        for name, region in self.soc_bus.regions.items():
            if region.linker: # Remove virtual regions.
                continue
            if len(self.memory_buses) and name == 'main_ram': # m bus
                bus = "m"
            else:
                bus = "p"
            mode = region.mode
            mode += "c" if region.cached else ""
            NaxRiscv.memory_regions.append( (region.origin, region.size, mode, bus) )

At the end, litex/soc/cores/cpu/naxriscv/core.py generates the scala command:

        for region in NaxRiscv.memory_regions:
            gen_args.append(f"--memory-region={region[0]},{region[1]},{region[2]},{region[3]}")

At line 158, this is specified:

        self.periph_buses     = [pbus] # Peripheral buses (Connected to main SoC's bus).
        self.memory_buses     = []           # Memory buses (Connected directly to LiteDRAM).

@Dolu1990
Member

The io_regions = {0x8000_0000: 0x8000_0000} # Origin, Length. line will be ignored on purpose.

Overall, what peripheral do you have issues accessing ?

@colle-chaude
Author

colle-chaude commented Dec 16, 2025

rom                 : Origin: 0x00000000, Size: 0x00020000, Mode:  RX, Cached:  True, Linker: False
sram                : Origin: 0x10000000, Size: 0x00002000, Mode: RWX, Cached:  True, Linker: False
[info] LitexMemoryRegion(SM(0x0, 0x20000),rxc,p)
[info] LitexMemoryRegion(SM(0x10000000, 0x2000),rwxc,p)

rom and sram are neither io nor memory, so the mmu raises a trap
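The trap can be reproduced with a small pure-Scala model. Assumption (mine, for illustration): ioRange is derived from the single isIo region and memRange from the DRAM region only, which matches the behavior described above:

```scala
object FaultDemo extends App {
  def hit(base: Long, size: Long)(a: Long): Boolean = a >= base && a < base + size

  def ioRange(a: Long): Boolean  = hit(0x80000000L, 0x80000000L)(a) // the litex "io" region
  def memRange(a: Long): Boolean = hit(0x40000000L, 0x8000000L)(a)  // main_ram only
  def faults(a: Long): Boolean   = !(memRange(a) || ioRange(a))     // MMU ACCESS_FAULT term

  assert(faults(0x00000000L))  // rom  -> trap, although it is a mapped region
  assert(faults(0x10000000L))  // sram -> trap, although it is a mapped region
  assert(!faults(0x40000000L)) // main_ram is fine
  assert(!faults(0xf0000000L)) // csr falls inside the big io window
}
```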

@Dolu1990
Member

Dolu1990 commented Dec 19, 2025

Shouldn't the fix be done here : (?)
https://github.com/litex-hub/pythondata-cpu-naxriscv/blob/main/pythondata_cpu_naxriscv/verilog/configs/gen.scala

(instead of modifying upstream Nax)

@colle-chaude
Author

Maybe it's possible to find a way, but I don't know how

@cklarhorst
Contributor

I’d be happy to help, could you share a concrete example that shows what's currently broken?

I just ran the following on the latest litex master and could not reproduce any issue:
litex_sim --cpu-type naxriscv

That booted a working bios.
This also worked for me:

litex> mem_list
Available memory regions:
OPENSBI  0x40f00000 0x80000
PLIC     0xf0c00000 0x400000
CLINT    0xf0010000 0x10000
ROM      0x00000000 0x20000
SRAM     0x10000000 0x2000
CSR      0xf0000000 0x10000

litex> mem_read 0x00000000 128
Memory dump:
0x00000000  6f 00 00 0b 13 00 00 00 13 00 00 00 13 00 00 00  o...............
0x00000010  13 00 00 00 13 00 00 00 13 00 00 00 13 00 00 00  ................
0x00000020  23 2e 11 fe 23 2c 51 fe 23 2a 61 fe 23 28 71 fe  #...#,Q.#*a.#(q.
0x00000030  23 26 a1 fe 23 24 b1 fe 23 22 c1 fe 23 20 d1 fe  #&..#$..#"..# ..
0x00000040  23 2e e1 fc 23 2c f1 fc 23 2a 01 fd 23 28 11 fd  #...#,..#*..#(..
0x00000050  23 26 c1 fd 23 24 d1 fd 23 22 e1 fd 23 20 f1 fd  #&..#$..#"..# ..
0x00000060  13 01 01 fc ef 40 40 40 83 20 c1 03 83 22 81 03  .....@@@. ..."..
0x00000070  03 23 41 03 83 23 01 03 03 25 c1 02 83 25 81 02  .#A..#...%...%..

litex> mem_read 0x10000000 128
Memory dump:
0x10000000  a4 0d 00 00 78 49 00 00 8c 49 00 00 00 00 00 00  ....xI...I......
0x10000010  d4 0e 00 00 a4 49 00 00 a8 49 00 00 00 00 00 00  .....I...I......
0x10000020  78 0e 00 00 d8 49 00 00 e0 49 00 00 00 00 00 00  x....I...I......
0x10000030  ac 0d 00 00 fc 49 00 00 04 4a 00 00 00 00 00 00  .....I...J......
0x10000040  20 12 00 00 2c 4d 00 00 34 4d 00 00 02 00 00 00   ...,M..4M......
0x10000050  c0 14 00 00 4c 4d 00 00 58 4d 00 00 02 00 00 00  ....LM..XM......
0x10000060  0c 14 00 00 6c 4d 00 00 78 4d 00 00 02 00 00 00  ....lM..xM......
0x10000070  1c 11 00 00 8c 4d 00 00 98 4d 00 00 02 00 00 00  .....M...M......

@Dolu1990
Member

Dolu1990 commented Jan 6, 2026

@cklarhorst
I got confused as well; it wasn't clear when the issue was opened, but this is about getting NaxRiscv upstream to work in Litex :)
The NaxRiscv version used by default in litex is OK (but old)

@cklarhorst
Contributor

Ok, so the issue is that memRange is missing: here.
Dolu explained the intent of the variables well here: here
Colle-chaude already nearly had the perfect solution here (together with his next answer), I think. Renaming his addressRange parameter to memRange would likely have worked, and it would have been a one-line fix.

@Dolu1990

  • How timing-critical are these decisions?
  • Would using non-fragmented address ranges save LUTs here, or am I missing something?
  • We could also set allow_read, allow_write, but would it help anywhere or only cost hardware?
  • Wouldn't it be better to use a single continuous range for IO (like how it is handled in litex) and use ACCESS_FAULT to reject unmapped regions?

My guess is you'll say this doesn't matter much :D. Then I would propose:
ioRange = memoryRegions.filter(!_.isCachable).map(_.mapping.hit(address)).orR
memRange = memoryRegions.filter(_.isCachable).map(_.mapping.hit(address)).orR
-> because for access_fault those are combined anyway

This results in potentially fragmented ioRange and memRange, but:
-> In litex we could cleanup the memory_region generation and get rid of the io_region
-> In litex it is already enforced that caching regions can only be placed in non-io regions and non-caching regions only in an io-region, so the mapping should be safe

Although, I think it would be clearer to have a validRange (covering all regions) alongside an ioRange, but that would require changing something in the NaxRiscv repository.
Additionally, we should definitely add a bit of documentation to the ioRange, memRange (properties of the region?) and peripheralRange (routing of the region?)

I have tested the !_.isCachable / _.isCachable version and was able to boot into the bios in litex_sim.

@colle-chaude would you like to create the final PR for this (after Dolu approves)? If not, I can take care of it.

@Dolu1990
Member

Dolu1990 commented Jan 8, 2026

Hi,

Got NaxRiscv upstream to work; I had to make the following changes (not using any PR):

import spinal.core._
import spinal.lib._
import naxriscv.compatibility._
import naxriscv.frontend._
import naxriscv.fetch._
import naxriscv.misc._
import naxriscv.execute._
import naxriscv.fetch._
import naxriscv.lsu._
import naxriscv.prediction._
import naxriscv.utilities._
import naxriscv.debug._
import naxriscv._

println(memoryRegions.mkString("\n"))
def ioRange (address : UInt) : Bool = memoryRegions.filter(_.isIo).map(_.mapping.hit(address)).orR
def fetchRange (address : UInt) : Bool = memoryRegions.filter(_.isExecutable).map(_.mapping.hit(address)).orR
def peripheralRange (address : UInt) : Bool = memoryRegions.filter(_.onPeripheral).map(_.mapping.hit(address)).orR
def memoryRange (address : UInt) : Bool = memoryRegions.filter(!_.isIo).map(_.mapping.hit(address)).orR

plugins ++= Config.plugins(
  xlen = xlen,
  ioRange = ioRange,
  fetchRange = fetchRange,
  memRange   = memoryRange,
  resetVector = resetVector,
  aluCount    = arg("alu-count", 2),
  decodeCount = arg("decode-count", 2),
  debugTriggers = 0,
  withRvc = arg("rvc", false),
  withLoadStore = true,
  withMmu = arg("mmu", true),
  withFloat  = arg("rvf", false),
  withDouble = arg("rvd", false),
  withDebug = debug,
  withEmbeddedJtagTap = false,
  withEmbeddedJtagInstruction = false,
  withCoherency = true,      
  withRdTime = true
)

Here is the simulation command I ran:
litex_sim --cpu-type=naxriscv --with-sdram --sdram-data-width=64 --bus-standard axi-lite --scala-args='rvc=false,rvf=false,rvd=false,alu-count=1,decode-count=1' --update-repo=no

How timing critical are these decisions?

TRANSLATED is the very critical one. The others are more relaxed.

Would using non fragmented address ranges save LUTs here or am I mising something?

On Tilelink, the idea is that a memory request should never be addressed to a non-existing peripheral.
It is a bit of a different philosophy than other memory buses. The Tilelink address decoder implementation can't handle transactions with no real destination.

We could also set allow_read, allow_write, but would it help anywhere or only cost hardware?

?

Wouldn't it be better to use a single continuous range for IO (like how it is handled in litex) and use ACCESS_FAULT to reject unmapped regions?

So, it's kind of the philosophy of Tilelink to always have exact knowledge of what the memory space is. What you may gain in the CPU by having "lazy" address checking, you may lose in the memory interconnect, because there you would need to check the exact address decoding.

Probably having exact checks in the CPU is, at the end of the day, more costly (ex: multiple CPUs => multiple checks => more hardware), but, oh well, if you have multiple CPUs, that cost isn't so big anymore compared to everything else ^^

One scenario where having exact knowledge in the CPU is useful is, for instance, if you have a big IO region (a slow wishbone / APB3 bus) and, in it, some small scratchpad that you want your CPU to know it is allowed to cache. (exact, precise knowledge of what is allowed)

In litex we could cleanup the memory_region generation and get rid of the io_region

This may have ripple effects on people assuming it exists. (<3 legacy <3)

I have tested the !_.isCachable / _.isCachable version and was able to boot into the bios in litex_sim.

Nice ^^ Using this PR ?

@cklarhorst
Contributor

Oh sorry, now I have more questions :D

In litex we could cleanup the memory_region generation and get rid of the io_region

This may have ripple effects on poeple assuming it to exists. (<3 legacy <3)

In my opinion, the current definition of the memory_regions is a bit vague.

  • vexiiriscv doesn't add any region with the IO mode at all
  • naxriscv always adds all litex IO regions (typically there is only one continuous big region, which contains addresses of non-existing peripherals!) with mode "io" and always without any of "rwx"; it will also add a region for every peripheral that exists -> therefore there is that overlap between the IO region and the other added regions

so then your

On Tilelink, the idea is that a memory request should never be addressed to a non-existing peripheral.
It is a bit a different phylosophy than other memory busses. The Tilelink address decoders implementation can't handle transaction with no real destination.

is, in my opinion, currently broken, because

TRANSLATED := ps.preAddress.resized
ALLOW_EXECUTE := True
ALLOW_READ := True
ALLOW_WRITE := True
PAGE_FAULT := False
ACCESS_FAULT := ps.preAddress.drop(physicalWidth) =/= 0 || !(memRange(TRANSLATED) || IO)

ACCESS_FAULT currently allows requests to the memory regions with mode "io", and for litex that currently means the whole IO range, including unmapped space. (I have not tested it, but I would guess that it currently still works because the tilelink interconnect will route it to the pbus and then the litex side will handle the unmapped requests somehow.)
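To make the unmapped-space point concrete, here is a hedged sketch (same illustrative model as before: ioRange taken as the whole litex io window, memRange as main_ram; the physical-width term is left out):

```scala
object UnmappedDemo extends App {
  def ioRange(a: Long): Boolean  = a >= 0x80000000L && a < 0x100000000L // whole io window
  def memRange(a: Long): Boolean = a >= 0x40000000L && a < 0x48000000L  // main_ram

  // The ACCESS_FAULT term from the snippet above (physical-width check omitted):
  def accessFault(a: Long): Boolean = !(memRange(a) || ioRange(a))

  // 0x90000000 maps to no real peripheral, yet the CPU lets the request through,
  // so the interconnect ends up with a transaction that has no destination.
  assert(!accessFault(0x90000000L))
}
```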

So maybe I would like to improve the documentation here:
https://github.com/enjoy-digital/litex/blob/0f6b897fb0ad5019470bd06f3770fbf27b080a6d/litex/soc/cores/cpu/naxriscv/core.py#L555-L561

  • currently, it is not clear whether overlap between regions is allowed
  • what IO means, or how it differs from the "non c" mode
  • for the mode field, is it allowed to have io together with rwxc, or is it either io or "rwxc"?

One scenario where having exact knowledge in the CPU is useful is, for instance, if you have a big IO region (a slow wishbone / APB3 bus) and, in it, some small scratchpad that you want your CPU to know it is allowed to cache. (exact, precise knowledge of what is allowed)

  • from litex's definition of the IO region, it is not allowed to have cachable space in it; what then would be the definition of "IO"? -> is it called the IO region because it goes to a slow bus (but that would be redundant with the "p" and "m" in the mode)?

In my opinion, it would be best to improve the definition of what the "io" mode means, or change the nax litex code in the same way vexii is doing it, so not adding the litex io-region.
In my opinion, it would be cool to keep p,m for tilelink routing + either "io" for the mmu's isIo (allowing invalid regions) or "rwxc" (describing only "valid" regions, which might overlap the "io" region).

Writing this, I just realized the litex io definition is just a hardware-friendly way of describing whether caching is allowed or not.

(sorry for the long and maybe confusing post)

@Dolu1990
Member

In my opinion, the current definition of the memory_regions is a bit vague.

Yes, it kinda always was a mess ^^

is in my opinion, currently broken because

Right.

VexiiRiscv is better than NaxRiscv here:

  • the memoryRegions in VexiiRiscv specify the SoC output memory buses, and then VexiiRiscv infers everything else
  • the memoryRegions in NaxRiscv specify the SoC output memory buses, but also configure the CPU Config.plugins directly

So your solution not using isIo but instead isCachable is probably better.

So here is one proposal for Gen.scala (the 0xF0010000l / 0xF0C00000l are for the clint / plic):

import spinal.core._
import spinal.lib._
import naxriscv.compatibility._
import naxriscv.frontend._
import naxriscv.fetch._
import naxriscv.misc._
import naxriscv.execute._
import naxriscv.fetch._
import naxriscv.lsu._
import naxriscv.prediction._
import naxriscv.utilities._
import naxriscv.debug._
import naxriscv._

val memoryRegionsNoIo = memoryRegions.filter(!_.isIo) //Remove all IO specifications
def ioRange (address : UInt) : Bool = memoryRegionsNoIo.filter(!_.isCachable).map(_.mapping.hit(address)).orR || SizeMapping(0xF0010000l, 0x10000).hit(address) || SizeMapping(0xF0C00000l, 0x400000).hit(address)
def fetchRange (address : UInt) : Bool = memoryRegionsNoIo.filter(_.isExecutable).map(_.mapping.hit(address)).orR
def peripheralRange (address : UInt) : Bool = memoryRegionsNoIo.filter(_.onPeripheral).map(_.mapping.hit(address)).orR
def memoryRange (address : UInt) : Bool = memoryRegionsNoIo.filter(_.isCachable).map(_.mapping.hit(address)).orR

plugins ++= Config.plugins(
  xlen = xlen,
  ioRange = ioRange,
  fetchRange = fetchRange,
  memRange   = memoryRange,
  resetVector = resetVector,
  aluCount    = arg("alu-count", 2),
  decodeCount = arg("decode-count", 2),
  debugTriggers = 0,
  withRvc = arg("rvc", false),
  withLoadStore = true,
  withMmu = arg("mmu", true),
  withFloat  = arg("rvf", false),
  withDouble = arg("rvd", false),
  withDebug = debug,
  withEmbeddedJtagTap = false,
  withEmbeddedJtagInstruction = false,
  withCoherency = true,      
  withRdTime = true
)
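As a sanity check, the coverage of those predicates can be evaluated standalone against the region list from the logs above (with a hypothetical pure-Scala stand-in for SizeMapping.hit):

```scala
object ProposalCheck extends App {
  case class Region(base: Long, size: Long, mode: String) {
    def hit(a: Long): Boolean = a >= base && a < base + size
  }

  // memoryRegions with the "io" entry dropped, as in the proposal above
  val noIo = Seq(
    Region(0x0L,        0x20000L,   "rxc"),  // rom
    Region(0x10000000L, 0x2000L,    "rwxc"), // sram
    Region(0x40000000L, 0x8000000L, "rwxc"), // main_ram
    Region(0xf0000000L, 0x10000L,   "rw"))   // csr

  def memoryRange(a: Long): Boolean = noIo.filter(_.mode.contains("c")).exists(_.hit(a))
  def ioRange(a: Long): Boolean =
    noIo.filterNot(_.mode.contains("c")).exists(_.hit(a)) ||
    (a >= 0xF0010000L && a < 0xF0020000L) || // clint
    (a >= 0xF0C00000L && a < 0xF1000000L)    // plic

  // rom and sram are now covered, so !(memoryRange(a) || ioRange(a)) no longer traps:
  for (a <- Seq(0x0L, 0x10000000L, 0x40000000L, 0xf0000000L, 0xf0c00000L))
    assert(memoryRange(a) || ioRange(a))
}
```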

I have not tested it but I would guess that it currently still works because the tilelink interconnect will route it to the pbus and then the litex side will handle the unmapped requests somehow.

Yes right

currently, it is not clear whether overlap between regions is allowed

Not allowed

what IO means or where it differs from the "non c" mode

With that proposal for Gen.scala above, the IO region will be ignored completely (to avoid the big 0x80000000 litex IO region, which would overlap other regions).

In my opinion it would be best to improve the definition of what the "IO" mode means

IO is often used to say many things at once, so it is not a very clear-cut thing (memory ordering preserved, cachable, with side-effects, ...).

Writing this, I just realized the litex io definition is just a hardware-friendly way of describing whether caching is allowed or not.

Pain <3

(sorry for the long and maybe confusing post)

no worries, this is a confusing topic in general ^^

@cklarhorst
Contributor

@Dolu1990
Your proposal for Gen.scala looks good to me.
I'm a bit sad that it will break compatibility with the current litex-recommended nax version.
The only way I see would be using reflection to check the parameters of the config, but that looked ugly to me, I don't know.

I also noticed that even without || SizeMapping(0xF0010000l, 0x10000).hit(address) || SizeMapping(0xF0C00000l, 0x400000).hit(address) the core is still able to access the PLIC, which confuses me a bit :D.

Accessing (PLIC 0xf0c00000 0x400000)

litex> mem_read 0xf0c00000 128
Memory dump:
0xf0c00000  00 00 00 00 01 00 00 00 01 00 00 00 01 00 00 00  ................
0xf0c00010  01 00 00 00 01 00 00 00 01 00 00 00 01 00 00 00  ................
0xf0c00020  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
0xf0c00030  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
0xf0c00040  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
0xf0c00050  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
0xf0c00060  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
0xf0c00070  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................

Right now, testing isn't much fun because of the submodule patches (#140).
I wanted to update the submodules because I saw that upstream RVLS already had a PR accepted, but I’m confused because the patch in this repo differs from the commits that landed upstream!
@Bill94l, do you still plan to push these patches upstream? Or do you need any help with that?

If there are no further objections, I'm happy to open the final pythondata-... PR.

@Dolu1990
Member

I also noticed that even without || SizeMapping(0xF0010000l, 0x10000).hit(address) || SizeMapping(0xF0C00000l, 0x400000).hit(address) the core is still able to access the PLIC, which confuses me a bit :D.

?? Are you sure ?? I mean, I just tried, and it doesn't even boot for me without it. Maybe you had a NaxRiscv verilog cached? Or missed the memoryRegionsNoIo?

Right now, testing isn't much fun because of the submodule patches (#140).

Yes, same :/

I saw that upstream RVLS already had a PR accepted

So, one thing: we can consider that NaxRiscv doesn't need to use RVLS upstream, and instead can use its own branch, to avoid different projects being coupled through the RVLS main branch.

but I’m confused because the patch in this repo differs from the commits that landed upstream!

Yes, I'm confused as well.

If there are no further objections, I'm happy to open the final pythondata-... PR.

Sure ^^

@colle-chaude
Author

@cklarhorst thank you for your help

I suppose I can close this topic now

