add addressRange for MMU accessFault check #159
colle-chaude wants to merge 2 commits into SpinalHDL:main
Conversation
|
Hi, Ahhhh now I understand better: memRange => all the non-IO memory range. That was the initial idea. So fetchRange isn't defining a new region of memory, but is a "filter" on the one already existing. Or can you tell me more about the setup where things broke in your tests? |
|
(That also continues this discussion: litex-hub/pythondata-cpu-naxriscv#7) Here are the memory regions given by LiteX: logs from litex: then from nax execution. This is interpreted by Based on that, I understood that from the litex point of view:
Which is not exactly the same as you intend, I think. My interpretation is that litex and nax don't give the same definition of IO and Peripheral, am I right? So to make Nax compatible with LiteX, either apply my PR, or change This is related to my question in litex-hub/pythondata-cpu-naxriscv#7:
I think there is the same kind of issue with isCachable/IO. I'm not sure if I am clear, tell me |
|
To be more specific: the slow bus is what is called peripheral; the litex IO tag means something else. But the ioRange used in the MMU is based on litex's IO tag. |
|
Hi ^^ LitexMemoryRegion(SM(0x80000000, 0x80000000),io,p) and overlap, and this is cursed. So yes, there is kind of a mismatch in the way Nax and litex handle LitexMemoryRegion. I don't know what the litex LitexMemoryRegion(SM(0x80000000, 0x80000000),io,p) is really about.
Can you tell me about it? What LitexMemoryRegion is specified by litex for it? |
|
Okay, as far as I know everything is defined in The IO region range is defined Then the memory mapping, then in Then back to At the end, line 158, this is specified: |
|
```python
io_regions = {0x8000_0000: 0x8000_0000}  # Origin, Length. Will be ignored on purpose.
```
Overall, what peripheral do you have issues accessing?
rom and sram are neither IO nor memory, so the MMU raises a trap
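The failure mode can be sketched with a small plain-Scala model of the MMU check. This is NOT the actual SpinalHDL code, and the region bases/sizes are hypothetical stand-ins for the LiteX map:

```scala
// Plain-Scala model of the MMU access check (illustrative values only).
case class Region(base: Long, size: Long, io: Boolean) {
  def hit(addr: Long): Boolean = addr >= base && addr < base + size
}

val rom     = Region(0x00000000L, 0x10000L,    io = false)
val sram    = Region(0x10000000L, 0x2000L,     io = false)
val mainRam = Region(0x40000000L, 0x10000000L, io = false)
val periph  = Region(0xF0000000L, 0x10000000L, io = true)

// memRange as currently wired covers only main_ram, not rom/sram
def memRange(addr: Long): Boolean = mainRam.hit(addr)
def ioRange(addr: Long): Boolean  = periph.hit(addr)

// The MMU faults when the address is neither memory nor IO
def accessFault(addr: Long): Boolean = !(memRange(addr) || ioRange(addr))

assert(!accessFault(0x40000000L)) // main_ram access is fine
assert(!accessFault(0xF0001000L)) // peripheral access is fine
assert(accessFault(0x00000100L))  // rom faults even though it exists
assert(accessFault(0x10000100L))  // sram faults too
```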
|
Shouldn't the fix be done here (?) instead of modifying upstream Nax? |
|
Maybe it's possible to find a way, but I don't know how |
|
I'd be happy to help; could you share a concrete example that shows what's currently broken? I just ran the following on the latest litex master and could not reproduce any issue: That booted a working bios. |
|
@cklarhorst |
|
Ok, so the issue is that memRange is missing: here.
My guess is you'll say this doesn't matter much :D. Then I would propose: This results in potentially fragmented ioRange and memRange, but: Although, I think it would be clearer to have a validRange (covering all regions) alongside an ioRange, but that would require changing something in the NaxRiscv repository. I have tested the !_.isCacheable / _.isCacheable version and was able to boot into the bios in litex_sim. @colle-chaude, would you like to create the final PR for this (after Dolu approves)? If not, I can take care of it. |
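The two options can be compared in a plain-Scala model (names like `cacheable` and the region values are stand-ins, not the LitexMemoryRegion API):

```scala
// Plain-Scala sketch of the two proposals; region values are hypothetical.
case class Region(base: Long, size: Long, cacheable: Boolean) {
  def hit(addr: Long): Boolean = addr >= base && addr < base + size
}

val regions = Seq(
  Region(0x00000000L, 0x10000L,    cacheable = true),  // rom
  Region(0x10000000L, 0x2000L,     cacheable = true),  // sram
  Region(0x40000000L, 0x10000000L, cacheable = true),  // main_ram
  Region(0xF0000000L, 0x10000000L, cacheable = false)  // peripherals
)

// Proposal: split on cacheability; possibly fragmented, but it covers everything
def ioRange(addr: Long): Boolean  = regions.filter(!_.cacheable).exists(_.hit(addr))
def memRange(addr: Long): Boolean = regions.filter(_.cacheable).exists(_.hit(addr))

// Alternative: one validRange over all regions, plus ioRange
// (arguably clearer, but needs a change on the NaxRiscv side)
def validRange(addr: Long): Boolean = regions.exists(_.hit(addr))

assert(memRange(0x00000100L))    // rom now counts as memory
assert(ioRange(0xF0000000L))     // peripherals stay IO
assert(!validRange(0x90000000L)) // unmapped space stays invalid
```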
|
Hi, I got NaxRiscv upstream to work; I had to make the following changes (not using any PR):
```scala
import spinal.core._
import spinal.lib._
import naxriscv.compatibility._
import naxriscv.frontend._
import naxriscv.fetch._
import naxriscv.misc._
import naxriscv.execute._
import naxriscv.lsu._
import naxriscv.prediction._
import naxriscv.utilities._
import naxriscv.debug._
import naxriscv._

println(memoryRegions.mkString("\n"))

def ioRange        (address : UInt) : Bool = memoryRegions.filter(_.isIo).map(_.mapping.hit(address)).orR
def fetchRange     (address : UInt) : Bool = memoryRegions.filter(_.isExecutable).map(_.mapping.hit(address)).orR
def peripheralRange(address : UInt) : Bool = memoryRegions.filter(_.onPeripheral).map(_.mapping.hit(address)).orR
def memoryRange    (address : UInt) : Bool = memoryRegions.filter(!_.isIo).map(_.mapping.hit(address)).orR

plugins ++= Config.plugins(
  xlen = xlen,
  ioRange = ioRange,
  fetchRange = fetchRange,
  memRange = memoryRange,
  resetVector = resetVector,
  aluCount = arg("alu-count", 2),
  decodeCount = arg("decode-count", 2),
  debugTriggers = 0,
  withRvc = arg("rvc", false),
  withLoadStore = true,
  withMmu = arg("mmu", true),
  withFloat = arg("rvf", false),
  withDouble = arg("rvd", false),
  withDebug = debug,
  withEmbeddedJtagTap = false,
  withEmbeddedJtagInstruction = false,
  withCoherency = true,
  withRdTime = true
)
```
Here is the simulation command I ran:
TRANSLATED is the really critical one; the others are more relaxed.
On Tilelink, the idea is that a memory request should never be addressed to a non-existing peripheral.
So, it is kind of the philosophy of Tilelink to always have an exact knowledge of what the memory space is. What you may gain in the CPU by having "lazy" address checking, you may lose in the memory interconnect, because there you would need to do the exact address decoding. Having exact checks in the CPU is probably more costly at the end of the day (ex: multiple CPUs => multiple checks => more hardware), but, oh well, if you have multiple CPUs, that cost isn't so big anymore compared to everything else ^^ One scenario where having exact knowledge in the CPU is useful is, for instance, when you have a big IO region (a slow wishbone / APB3 bus), and inside it you have some small scratchpad that you want your CPU to know it is allowed to cache (exact, precise knowledge of what is allowed).
This may have ripple effects on people assuming it exists. (<3 legacy <3)
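That scratchpad scenario can be sketched the same way, as a plain-Scala model (the addresses are made up; this is not interconnect code):

```scala
// Hypothetical: a big slow IO window with a small cacheable scratchpad inside it.
case class Window(base: Long, size: Long) {
  def hit(addr: Long): Boolean = addr >= base && addr < base + size
}

val bigIo      = Window(0x80000000L, 0x10000000L) // slow wishbone/APB3 window
val scratchpad = Window(0x80100000L, 0x1000L)     // small RAM inside it

// Precise knowledge in the CPU: carve the scratchpad out of the IO range,
// so the CPU knows it is allowed to cache it.
def ioRange(addr: Long): Boolean  = bigIo.hit(addr) && !scratchpad.hit(addr)
def memRange(addr: Long): Boolean = scratchpad.hit(addr)

assert(ioRange(0x80000000L))  // plain IO access
assert(!ioRange(0x80100010L)) // the scratchpad is not IO...
assert(memRange(0x80100010L)) // ...it is cacheable memory
```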
Nice ^^ Using this PR ? |
|
oh sorry, now I have more questions :D
In my opinion, the current definition of the memory_regions is a bit vague.
so then your
is, in my opinion, currently broken, because NaxRiscv/src/main/scala/naxriscv/misc/MmuPlugin.scala Lines 326 to 331 in 048b6c3 ACCESS_FAULT currently allows requests to the memory regions with mode "IO", and for litex that currently means the whole IO range, including unmapped space. (I have not tested it, but I would guess that it currently still works because the tilelink interconnect will route it to the pbus and then the litex side will handle the unmapped requests somehow.) So maybe I would like to improve the documentation here:
In my opinion, it would be best to improve the definition of what the "IO" mode means, or to change the nax litex code in the same way vexii is doing it, i.e. not adding the litex io-region. Writing this, I just realized the litex IO definition is just a hardware-friendly way of describing whether caching is allowed or not. (sorry for the long and maybe confusing post) |
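The point about the litex IO region masking unmapped accesses can be made concrete with a plain-Scala model (the 0x8000_0000 origin/length is from the litex io_regions above; the rest is illustrative):

```scala
// Model of why the litex io_regions entry hides unmapped accesses from the MMU.
case class Window(base: Long, size: Long) {
  def hit(addr: Long): Boolean = addr >= base && addr < base + size
}

// litex: io_regions = {0x8000_0000: 0x8000_0000} -> the whole upper half
val litexIo = Window(0x80000000L, 0x80000000L)
val mainRam = Window(0x40000000L, 0x10000000L) // hypothetical

def ioRange(addr: Long): Boolean     = litexIo.hit(addr)
def memRange(addr: Long): Boolean    = mainRam.hit(addr)
def accessFault(addr: Long): Boolean = !(memRange(addr) || ioRange(addr))

// 0x9000_0000 maps to no real peripheral, but the MMU lets it through,
// so the bogus request reaches the interconnect instead of trapping early:
assert(!accessFault(0x90000000L))
```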
Yes, it kinda always was a mess ^^
Right. VexiiRiscv is better than NaxRiscv:
So your solution of using isCachable instead of isIo is probably better. So here is one proposal for Gen.scala (the 0xF0010000l / 0xF0C00000l are for the clint / plic):

```scala
import spinal.core._
import spinal.lib._
import naxriscv.compatibility._
import naxriscv.frontend._
import naxriscv.fetch._
import naxriscv.misc._
import naxriscv.execute._
import naxriscv.lsu._
import naxriscv.prediction._
import naxriscv.utilities._
import naxriscv.debug._
import naxriscv._

val memoryRegionsNoIo = memoryRegions.filter(!_.isIo) // Remove all IO specifications

def ioRange        (address : UInt) : Bool = memoryRegionsNoIo.filter(!_.isCachable).map(_.mapping.hit(address)).orR || SizeMapping(0xF0010000l, 0x10000).hit(address) || SizeMapping(0xF0C00000l, 0x400000).hit(address)
def fetchRange     (address : UInt) : Bool = memoryRegionsNoIo.filter(_.isExecutable).map(_.mapping.hit(address)).orR
def peripheralRange(address : UInt) : Bool = memoryRegionsNoIo.filter(_.onPeripheral).map(_.mapping.hit(address)).orR
def memoryRange    (address : UInt) : Bool = memoryRegionsNoIo.filter(_.isCachable).map(_.mapping.hit(address)).orR

plugins ++= Config.plugins(
  xlen = xlen,
  ioRange = ioRange,
  fetchRange = fetchRange,
  memRange = memoryRange,
  resetVector = resetVector,
  aluCount = arg("alu-count", 2),
  decodeCount = arg("decode-count", 2),
  debugTriggers = 0,
  withRvc = arg("rvc", false),
  withLoadStore = true,
  withMmu = arg("mmu", true),
  withFloat = arg("rvf", false),
  withDouble = arg("rvd", false),
  withDebug = debug,
  withEmbeddedJtagTap = false,
  withEmbeddedJtagInstruction = false,
  withCoherency = true,
  withRdTime = true
)
```
Yes right
Not allowed
With that proposal for Gen.scala above, the IO region will be ignored completely (to avoid the big 0x80000000 litex IO region, which would overlap other regions).
IO is often used to say many things at once, so it's not a very clear-cut thing (memory ordering preserved, cacheable, with side-effects, ...).
Pain <3
no worries, this is a confusing topic in general ^^ |
|
@Dolu1990 I also noticed that even without Accessing (PLIC 0xf0c00000 0x400000) Right now, testing isn't much fun because of the submodule patches (#140). If there are no further objections, I'm happy to open the final pythondata-... PR. |
?? Are you sure ?? I mean, I just tried, and it doesn't even boot for me without it. (Maybe you had a NaxRiscv verilog cached? Or missed the memoryRegionsNoIo?)
Yes, same :/
So, one thing: we can consider that NaxRiscv doesn't need to use RVLS upstream, and can instead use its own branch, to avoid different projects being coupled through the RVLS main branch.
Yes, I'm confused as well.
Sure ^^ |
|
@cklarhorst thank you for your help. I suppose I can close this topic now |
Issue : https://github.com/litex-hub/pythondata-cpu-naxriscv/issues/7
The MMU needs to check whether an address corresponds to an existing endpoint.
!(memRange(TRANSLATED) || IO) doesn't cover the whole range. Replace memRange by an addressRange that covers all valid addresses.
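The effect of the proposed addressRange can be sketched as a plain-Scala model (region values are hypothetical, not the actual MmuPlugin code):

```scala
// Model of the fix: fault only when the address hits no region at all.
case class Region(base: Long, size: Long, io: Boolean) {
  def hit(addr: Long): Boolean = addr >= base && addr < base + size
}

val regions = Seq(
  Region(0x00000000L, 0x10000L,    io = false), // rom
  Region(0x40000000L, 0x10000000L, io = false), // main_ram
  Region(0xF0000000L, 0x10000000L, io = true)   // peripherals
)

// Old check: memRange covered only main_ram, so rom trapped
def memRange(addr: Long): Boolean = regions(1).hit(addr)
def io(addr: Long): Boolean       = regions.filter(_.io).exists(_.hit(addr))
def oldFault(addr: Long): Boolean = !(memRange(addr) || io(addr))

// New check: addressRange covers every valid endpoint
def addressRange(addr: Long): Boolean = regions.exists(_.hit(addr))
def newFault(addr: Long): Boolean     = !addressRange(addr)

assert(oldFault(0x00000100L))  // rom used to trap
assert(!newFault(0x00000100L)) // now it is reachable
assert(newFault(0x20000000L))  // truly unmapped space still traps
```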