Hi,
When you run the litex generation, in the log you can see, for instance:

```
cd /media/data2/proj/litex/pythondata-cpu-naxriscv/pythondata_cpu_naxriscv/verilog/ext/NaxRiscv && sbt "runMain naxriscv.platform.litex.NaxGen --netlist-name=NaxRiscvLitex_5486efeb6997eee4b65e18f65fec5cc3 --netlist-directory=/media/data2/proj/litex/pythondata-cpu-naxriscv/pythondata_cpu_naxriscv/verilog --reset-vector=0 --xlen=64 --cpu-count=1 --l2-bytes=131072 --l2-ways=8 --litedram-width=128 --memory-region=2147483648,2147483648,io,p --memory-region=0,131072,rxc,p --memory-region=268435456,8192,rwxc,p --memory-region=1073741824,536870912,rwxc,m --memory-region=2147483648,8192,rw,p --memory-region=3758096384,1048576,rw,p --memory-region=4026531840,65536,rw,p --scala-args=rvc=true,rvf=true,rvd=true --with-jtag-tap --with-debug --with-dma --scala-file=/media/data2/proj/litex/pythondata-cpu-naxriscv/pythondata_cpu_naxriscv/verilog/configs/gen.scala"
```
Hi, I know this; it is about litex generating a nax_soc, and this is all right with sbt. But I want to know how to use the arguments for litex.scala. For example:

```
home/work1/pythondata-cpu-naxriscv/pythondata_cpu_naxriscv/verilog/ext/NaxRiscv$ sbt "runMain naxriscv.platform.LitexGen --with-jtag-tap --with-debug --scala-args=rvc=true --scala-file=/home/jlduan/work1/pythondata-cpu-naxriscv/pythondata_cpu_naxriscv/verilog/configs/gen.scala --scala-args=rvc=true"
```

But the error is:

```
[error] Exception in thread "main" java.lang.AssertionError: assertion failed
[error] 	at scala.Predef$.assert(Predef.scala:156)
[error] 	at spinal.core.package$.assert(core.scala:497)
[error] 	at naxriscv.lsu.DataMemBus$$anon$13.<init>(DataCache.scala:408)
[error] 	at naxriscv.lsu.DataMemBus.toAxi4(DataCache.scala:407)
[error] 	at naxriscv.lsu.DataCacheAxi4$$anonfun$1$$anon$1.<init>(DataCacheAxi4.scala:16)
[error] 	at naxriscv.lsu.DataCacheAxi4$$anonfun$1.apply(DataCacheAxi4.scala:13)
[error] 	at naxriscv.lsu.DataCacheAxi4$$anonfun$1.apply(DataCacheAxi4.scala:13)
```

I think the arguments are not all right. Thank you for your reply!
Ahhhh, so the issue is that you probably are using the pythondata master branch, while the active branch was the smp one.
I just merged smp into master now. Should be good then.
But note that without memory region specifications, the SoC will not be able to access anything.
Hi, so how can I use this litex.scala file to generate a Verilog file? Yesterday, I tried to clone this folder again with git and ran the command line, but I still got the same error as before. Why is this? Thank you. May I ask which branch should be inside here:

```
jlduan@dev-optiplex3060:~/work1/pythondata-cpu-naxriscv/pythondata_cpu_naxriscv/verilog/ext/NaxRiscv$ git branch -a
```
This works for me :
```
git clone https://github.com/SpinalHDL/NaxRiscv.git --recursive
git clone https://github.com/litex-hub/pythondata-cpu-naxriscv.git --branch master
cd NaxRiscv
sbt "runMain naxriscv.platform.litex.NaxGen --netlist-name=NaxRiscvLitex_5486efeb6997eee4b65e18f65fec5cc3 --netlist-directory=/media/data2/proj/litex/pythondata-cpu-naxriscv/pythondata_cpu_naxriscv/verilog --reset-vector=0 --xlen=64 --cpu-count=1 --l2-bytes=131072 --l2-ways=8 --litedram-width=128 --memory-region=2147483648,2147483648,io,p --memory-region=0,131072,rxc,p --memory-region=268435456,8192,rwxc,p --memory-region=1073741824,536870912,rwxc,m --memory-region=2147483648,8192,rw,p --memory-region=3758096384,1048576,rw,p --memory-region=4026531840,65536,rw,p --scala-args=rvc=true,rvf=true,rvd=true --with-jtag-tap --with-debug --with-dma --scala-file=../pythondata-cpu-naxriscv/pythondata_cpu_naxriscv/verilog/configs/gen.scala"
```
Hi, recently I have been trying to develop an SoC with an AXI4 interface using traditional SpinalHDL, but I have encountered some issues and hope you can help me take a look. Thank you.
```
[error] Exception in thread "main" java.lang.IllegalArgumentException: requirement failed: You need to set idWidth
[error] 	at scala.Predef$.require(Predef.scala:224)
[error] 	at spinal.lib.bus.amba4.axi.Axi4Config.<init>
```
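For reference, a minimal sketch of what that requirement checks, assuming spinal.lib's Axi4Config leaves idWidth unset by default while useId defaults to true (both inferred from the error message, not confirmed in this thread):

```scala
import spinal.lib.bus.amba4.axi.Axi4Config

// Throws "You need to set idWidth": ids are in use but no width was given (assumed defaults)
val bad = Axi4Config(addressWidth = 30, dataWidth = 32)

// Fine: give the config an explicit id width, as the code later in this thread does
val ok = Axi4Config(addressWidth = 30, dataWidth = 32, idWidth = 4)
```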
the source: (the full NaxSoc_2 file, pasted without code formatting)

I want to use an Axi4 crossbar to build an AXI interface for DDR3 and an APB interface, but the configuration keeps failing on the id width (line 157), and I don't have any ideas now. Could you please take a look when you have time? Thank you!
The code is completely mangled; you need to send it between

```scala
code here
```
```scala
package naxriscv
import naxriscv.compatibility._
import naxriscv.debug.EmbeddedJtagPlugin
import naxriscv.frontend._
import naxriscv.fetch._
import naxriscv.misc._
import naxriscv.execute._
import naxriscv.execute.fpu._
import naxriscv.fetch._
import naxriscv.fetch.FetchAxi4
import naxriscv.interfaces.CommitService
import naxriscv.lsu._
import naxriscv.lsu2.Lsu2Plugin
import naxriscv.prediction._
import naxriscv.riscv.IntRegFile
import naxriscv.utilities._
import spinal.core._
import spinal.lib._
import spinal.lib.bus.amba3.apb._
import spinal.lib.bus.amba4.axi._
import spinal.lib.com.jtag.Jtag
import spinal.lib.com.jtag.sim.JtagTcp
import spinal.lib.system.debugger.{JtagAxi4SharedDebugger, JtagBridge, SystemDebugger, SystemDebuggerConfig}
import scala.collection.mutable.ArrayBuffer
import scala.collection.Seq
class NaxSoc_2(plugins : ArrayBuffer[Plugin],toPeripheral : UInt => Bool) extends Component {
val io = new Bundle {
val debug = slave(Jtag())
val debug_resetn = in Bool()
val asyncReset = in Bool()
val axiClk = in Bool()
val Ddr3CtrlAxi = master(Axi4Shared(Axi4Config(
addressWidth = 30,
dataWidth = 32,
idWidth = 4,
useLock = false,
useRegion = false,
useCache = false,
useProt = false,
useQos = false
)))
val userApb3 = master(Apb3(Apb3Config(
addressWidth = 21,
dataWidth = 32
)))
val timerInterrupt = in Bool()
val softInterrupt = in Bool()
val xtnlInterrupt = in Bool()
}
val resetCtrlClockDomain = ClockDomain(
clock = io.axiClk,
config = ClockDomainConfig(
resetKind = BOOT
)
)
val resetCtrl = new ClockingArea(resetCtrlClockDomain) {
val systemResetUnbuffered = False
//Implement an counter to keep the reset axiResetOrder high 64 cycles
// Also this counter will automaticly do a reset when the system boot.
val systemResetCounter = Reg(UInt(6 bits)) init (0)
when(systemResetCounter =/= U(systemResetCounter.range -> true)) {
systemResetCounter := systemResetCounter + 1
systemResetUnbuffered := True
}
when(BufferCC(io.asyncReset)) {
systemResetCounter := 0
}
//Create all reset used later in the design
val systemReset = RegNext(systemResetUnbuffered)
val axiReset = RegNext(systemResetUnbuffered)
}
//////////////////////////////////////
val axiFrequency = 50 MHz
val onChipRamSize = 2 MB
//////////////////////////////
val axiClockDomain = ClockDomain(
clock = io.axiClk,
reset = resetCtrl.axiReset,
frequency = FixedFrequency(axiFrequency) //The frequency information is used by the SDRAM controller
)
val debugClockDomain = ClockDomain(
clock = io.axiClk,
reset = resetCtrl.systemReset,
frequency = FixedFrequency(axiFrequency)
)
val axi = new ClockingArea(axiClockDomain) {
val ram = Axi4SharedOnChipRam(
dataWidth = 32,
byteCount = onChipRamSize,
idWidth = 4
)
val apbBridge = Axi4SharedToApb3Bridge(
addressWidth = 21,
dataWidth = 32,
idWidth = 2
)
apbBridge.io.apb <> io.userApb3
}
////////////////////////////////
val ramDataWidth = 32
val ioDataWidth = 32
val memoryRegions = (0x80000000l, 0xA0000000l)
// val toPeripheral : UInt => Bool
plugins+= new FetchAxi4(
ramDataWidth = ramDataWidth,
ioDataWidth = ioDataWidth,
toPeripheral = cmd => toPeripheral(cmd.address)
// toPeripheral = 0xA0000000L
)
plugins += new DataCacheAxi4(
dataWidth = ramDataWidth
)
plugins += new LsuPeripheralAxiLite4(
ioDataWidth = ioDataWidth
)
val cpu = new NaxRiscv(
plugins
)
///////////////////////////////////////
private def priv_port(cpu: NaxRiscv) = cpu.framework.getService[PrivilegedPlugin].io
def int_mtimer(cpu: NaxRiscv) = priv_port(cpu).int.machine.timer
def int_msoftware(cpu: NaxRiscv) = priv_port(cpu).int.machine.software
def int_mexternal(cpu: NaxRiscv) = priv_port(cpu).int.machine.external
def int_sexternal(cpu: NaxRiscv) = priv_port(cpu).int.supervisor.external
def int_rdtime(cpu: NaxRiscv) = priv_port(cpu).rdtime
def dbg_jtag(cpu: NaxRiscv) = cpu.framework.getService[EmbeddedJtagPlugin].logic.jtag
//FetchAxi4Parrot split a single axi4 to double axi4, modified from FetchAxi4 plugin
// def inst_main(cpu: NaxRiscv) = cpu.framework.getService[FetchAxi4].logic.axiRam
//
// def data_ram(cpu: NaxRiscv) = cpu.framework.getService[DataCacheAxi4].logic.axi
////////////////////////////////////////////////////////////////////////////////////////////////////////////
val ibus = cpu.framework.getService[FetchAxi4].logic.axiRam
val dbus = cpu.framework.getService[DataCacheAxi4].logic.axi
// val outputConfig = Axi4ReadOnlyArbiter.getInputConfig(Axi4Config(addressWidth = 30, dataWidth = 32, idWidth = 0), inputsCount = 2)
// val arbiter = Axi4ReadOnlyArbiter(outputConfig, inputsCount = 2)
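// Read path: merge the instruction-fetch bus and the read half of the data
// bus through a single Axi4ReadOnlyArbiter; writes are taken from dbus only.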
val inputConfig = Axi4Config(addressWidth = 30, dataWidth = 32, idWidth = 0)
val outputConfig = Axi4ReadOnlyArbiter.getInputConfig(inputConfig, inputsCount = 2)
val configWithIdWidth = outputConfig.copy(idWidth = 1) // set the correct idWidth value
val arbiter = Axi4ReadOnlyArbiter(configWithIdWidth, inputsCount = 2)
// val arbiter = Axi4ReadOnlyArbiter(dbus.config.copy(idWidth = (dbus.config.idWidth min ibus.config.idWidth) + 1), inputsCount = 2)//32
// val arbiter = Axi4ReadOnlyArbiter(outputConfig =Axi4Config(
// addressWidth = 30,
// dataWidth = 32,
// idWidth = 0) , inputsCount = 2)
// val arbiter = Axi4ReadOnlyArbiter(outputConfig = dbus.config.copy(idWidth = (dbus.config.idWidth)), inputsCount = 2)//32
arbiter.io.inputs(0) << ibus
arbiter.io.inputs(1) << dbus.toReadOnly()
// arbiter.io.inputs(1) << dbus
val bus = master(Axi4(arbiter.outputConfig))
bus << arbiter.io.output
bus << dbus.toWriteOnly()
Axi4SpecRenamer(bus)
///////////////////////////////////////////////////////////////////////////////////////////////////////////
////////////////
// val ibus = cpu.framework.getService[FetchAxi4].logic.axiRam.toIo()
// val dbus = cpu.framework.getService[DataCacheAxi4].logic.axi.toIo()
//////////////////////////////////
Axi4SpecRenamer(ibus)
Axi4SpecRenamer(dbus)
//
io.debug.setName("jtag")
// io.uart.setName("uart")
io.debug_resetn.setName("jtag_rstn")
val l = Config.plugins(
withRdTime = false,
aluCount = 2,
decodeCount = 2,
debugTriggers = 4,
withDedicatedLoadAgu = false,
withRvc = true,
withLoadStore = true,
withMmu = true,
withDebug = true,
withEmbeddedJtagTap = true,
jtagTunneled = false,
withFloat = false,
withDouble = false,
withLsu2 = true,
lqSize = 16,
sqSize = 16,
// withCoherency = true,
ioRange = a => a(31 downto 28) === 0x1 // || !a(12)//(a(5, 6 bits) ^ a(12, 6 bits)) === 51
)
l.foreach {
case p: EmbeddedJtagPlugin => p.debugCd.load(ClockDomain.current.copy(reset = Bool().setName("debug_reset")))
case _ =>
}
// val core_cfg = initCfg(
// resetVector = 0x00000000L,
// ioRange = _(31 downto 28) === 0x2,
// fetchRange = _(31 downto 28) =/= 0x2
// )
// core_cfg.foreach {
// case p: EmbeddedJtagPlugin => p.debugCd.load(ClockDomain.current.copy(reset = io.debug_resetn))
// case _ =>
// }
val core = new NaxRiscv(l)
//
// Axi4SpecRenamer(inst_main(core))
// Axi4SpecRenamer(data_ram(core))
// Axi4SpecRenamer(inst_perp(core))
//
io.debug <> dbg_jtag(core)
//
// //Just for test, set all INT to zero
// int_mtimer(core).clear()
// int_msoftware(core).clear()
// int_mexternal(core).clear()
// int_sexternal(core).clear()
// int_rdtime(core).clearAll()
// val ram = new Axi4SharedOnChipRam(dataWidth = 32, byteCount = 256, idWidth = 4, arwStage = true)
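// Address map handled by the crossbar below: on-chip RAM at 0x10000000,
// the DDR3 controller at 0x40000000, and the APB bridge at 0xF0200000.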
val main_bus = Axi4CrossbarFactory()
main_bus.addSlaves(
axi.ram.io.axi -> (0x10000000L, 16 MiB),
io.Ddr3CtrlAxi -> (0x40000000L, 1024 MB),
axi.apbBridge.io.axi -> (0xF0200000L, 2 MB)
)
noIoPrefix()
main_bus.addConnections(
// inst_main(core) -> List(ram.io.axi, io.Ddr3CtrlAxi),
// data_ram(core) -> List(ram.io.axi, io.Ddr3CtrlAxi, axi.apbBridge.io.axi)
ibus -> List(axi.ram.io.axi, io.Ddr3CtrlAxi),
dbus -> List(axi.ram.io.axi, io.Ddr3CtrlAxi, axi.apbBridge.io.axi)
)
main_bus.build()
// val perp_stor_bus = Axi4CrossbarFactory()
// perp_stor_bus.addSlaves(
// qspi_flash.io.inst_port -> (0x00000000L, 16 MiB)
// )
// perp_stor_bus.addConnections(
// inst_perp(core) -> List(qspi_flash.io.inst_port)
// )
// perp_stor_bus.build()
//
// //A simple uart controller with apb bus
// val uart = master(Uart())
// val uart_apb_master = Apb3(5, 32)
// //Just change the datawidth from 32 to 8,nothing else
// resizeConnect(slave = uart.io.apb, master = uart_apb_master)
// io.uart <> uart.io.uart
// io.qspi <> qspi_flash.io.qspi
//}
}
object NaxSoc_2 extends App{
def plugins = {
val l = Config.plugins(
withRdTime = false,
aluCount = 2,
decodeCount = 2,
debugTriggers = 4,
withDedicatedLoadAgu = false,
withRvc = true,
withLoadStore = true,
withMmu = true,
withDebug = true,
withEmbeddedJtagTap = true,
jtagTunneled = false,
withFloat = false,
withDouble = false,
withLsu2 = true,
lqSize = 16,
sqSize = 16,
// withCoherency = true,
ioRange = a => a(31 downto 28) === 0x1// || !a(12)//(a(5, 6 bits) ^ a(12, 6 bits)) === 51
)
l.foreach{
case p : EmbeddedJtagPlugin => p.debugCd.load(ClockDomain.current.copy(reset = Bool().setName("debug_reset")))
case _ =>
}
l
}
SpinalVerilog(new NaxSoc_2(plugins,address => False))
}
```
Ahh, I am so sorry about that.
It is all related to the AXI id width.

Got it to not complain about it via:

```scala
val inputConfig = Axi4Config(addressWidth = 30, dataWidth = 32, idWidth = 4)
val outputConfig = Axi4ReadOnlyArbiter.getInputConfig(inputConfig, inputsCount = 2)
val configWithIdWidth = outputConfig.copy(idWidth = 2) // set the correct idWidth value
```

That will lead you to other remaining issues, such as data width mismatches on buses.
So how can I deal with the problem if I want to make the idWidth match? For example, this line also has the problem:

```scala
val arbiter = Axi4ReadOnlyArbiter(dbus.config.copy(idWidth = (dbus.config.idWidth min ibus.config.idWidth) + 1), inputsCount = 2)
```

According to the system error, I already know that the problem lies in this line, but now I don't know how to solve it. Can you please provide some suggestions?
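For what it's worth, a sketch of deriving the arbiter's output config from the two input buses, assuming Axi4ReadOnlyArbiter widens the id by log2Up(inputsCount) so it can tag which input a transaction came from (an assumption based on the getInputConfig helper used above):

```scala
// Sketch: ibus/dbus are the fetch and data-cache AXI buses from the code above.
val inputsCount   = 2
val inputIdWidth  = ibus.config.idWidth max dbus.config.idWidth
// Output ids must be wide enough to carry the input id plus the input index.
val arbiterConfig = dbus.config.copy(idWidth = inputIdWidth + log2Up(inputsCount))
val arbiter       = Axi4ReadOnlyArbiter(arbiterConfig, inputsCount = inputsCount)
```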
Hi, I have applied your suggestions to my program, but now there is another bit-width mismatch issue:
```
[error] Exception in thread "main" spinal.core.SpinalExit:
[error] Error detected in phase PhaseNormalizeNodeInputs
[error] ********************************************************************************
[error] ********************************************************************************
[error] WIDTH MISMATCH (64 bits <- 32 bits) on (toplevel/cpu/FetchAxi4_logic_axiRam_rdata : in Bits[64 bits]) := (toplevel/arbiter/io_inputs_0_r_payload_data : out Bits[32 bits]) at
[error] spinal.lib.bus.amba4.axi.Axi4R$StreamPimper.drive(Axi4Channel.scala:467)
[error] spinal.lib.bus.amba4.axi.Axi4ReadOnly.$greater$greater(Axi4ReadOnly.scala:23)
[error] spinal.lib.bus.amba4.axi.Axi4ReadOnly.$less$less(Axi4ReadOnly.scala:20)
[error] naxriscv.NaxSoc_2.<init>(NAX_SOC.scala:165)
[error] naxriscv.NaxSoc_2$$anonfun$9.apply(NAX_SOC.scala:297)
[error] naxriscv.NaxSoc_2$$anonfun$9.apply(NAX_SOC.scala:297)
[error] spinal.sim.JvmThread.run(SimManager.scala:51)
[error] ********************************************************************************
[error] ********************************************************************************
[error] WIDTH MISMATCH (64 bits <- 32 bits) on (toplevel/??? : Bits[64 bits]) := (toplevel/Ddr3CtrlAxi_arbiter/io_readInputs_0_r_payload_data : out Bits[32 bits]) at
[error] spinal.lib.bus.amba4.axi.Axi4R$StreamPimper.drive(Axi4Channel.scala:467)
[error] spinal.lib.bus.amba4.axi.Axi4ReadOnly.$greater$greater(Axi4ReadOnly.scala:23)
[error] spinal.lib.bus.amba4.axi.Axi4ReadOnly.$less$less(Axi4ReadOnly.scala:20)
[error] spinal.lib.bus.amba4.axi.Axi4CrossbarFactory$$anonfun$29$$anon$7$$anonfun$40.apply(Axi4Crossbar.scala:271)
[error] spinal.lib.bus.amba4.axi.Axi4CrossbarFactory$$anonfun$29$$anon$7$$anonfun$40.apply(Axi4Crossbar.scala:270)
[error] spinal.lib.bus.amba4.axi.Axi4CrossbarFactory$$anonfun$29$$anon$7.<init>(Axi4Crossbar.scala:270)
[error] spinal.lib.bus.amba4.axi.Axi4CrossbarFactory$$anonfun$29.apply(Axi4Crossbar.scala:260)
[error] spinal.lib.bus.amba4.axi.Axi4CrossbarFactory$$anonfun$29.apply(Axi4Crossbar.scala:194)
[error] spinal.lib.bus.amba4.axi.Axi4CrossbarFactory.build(Axi4Crossbar.scala:194)
[error] naxriscv.NaxSoc_2.<init>(NAX_SOC.scala:247)
[error] naxriscv.NaxSoc_2$$anonfun$9.apply(NAX_SOC.scala:297)
[error] naxriscv.NaxSoc_2$$anonfun$9.apply(NAX_SOC.scala:297)
[error] spinal.sim.JvmThread.run(SimManager.scala:51)
[error] ********************************************************************************
[error] ********************************************************************************
[error] WIDTH MISMATCH (64 bits <- 32 bits) on (toplevel/??? : Bits[64 bits]) := (toplevel/axi_ram_io_axi_arbiter/io_readInputs_0_r_payload_data : out Bits[32 bits]) at
[error] spinal.lib.bus.amba4.axi.Axi4R$StreamPimper.drive(Axi4Channel.scala:467)
[error] spinal.lib.bus.amba4.axi.Axi4ReadOnly.$greater$greater(Axi4ReadOnly.scala:23)
[error] spinal.lib.bus.amba4.axi.Axi4ReadOnly.$less$less(Axi4ReadOnly.scala:20)
[error] spinal.lib.bus.amba4.axi.Axi4CrossbarFactory$$anonfun$29$$anon$7$$anonfun$40.apply(Axi4Crossbar.scala:271)
[error] spinal.lib.bus.amba4.axi.Axi4CrossbarFactory$$anonfun$29$$anon$7$$anonfun$40.apply(Axi4Crossbar.scala:270)
[error] spinal.lib.bus.amba4.axi.Axi4CrossbarFactory$$anonfun$29$$anon$7.<init>(Axi4Crossbar.scala:270)
[error] spinal.lib.bus.amba4.axi.Axi4CrossbarFactory$$anonfun$29.apply(Axi4Crossbar.scala:260)
[error] spinal.lib.bus.amba4.axi.Axi4CrossbarFactory$$anonfun$29.apply(Axi4Crossbar.scala:194)
[error] spinal.lib.bus.amba4.axi.Axi4CrossbarFactory.build(Axi4Crossbar.scala:194)
[error] naxriscv.NaxSoc_2.<init>(NAX_SOC.scala:247)
[error] naxriscv.NaxSoc_2$$anonfun$9.apply(NAX_SOC.scala:297)
[error] naxriscv.NaxSoc_2$$anonfun$9.apply(NAX_SOC.scala:297)
[error] spinal.sim.JvmThread.run(SimManager.scala:51)
[error] ********************************************************************************
[error] ********************************************************************************
[error] Design's errors are listed above.
[error] SpinalHDL compiler exit stack :
[error] at spinal.core.SpinalExit$.apply(Misc.scala:446)
[error] at spinal.core.SpinalError$.apply(Misc.scala:501)
[error] at spinal.core.internals.PhaseContext.checkPendingErrors(Phase.scala:177)
[error] at spinal.core.internals.PhaseContext.doPhase(Phase.scala:193)
[error] at spinal.core.internals.SpinalVerilogBoot$$anonfun$singleShot$2$$anonfun$apply$142.apply(Phase.scala:2926)
[error] at spinal.core.internals.SpinalVerilogBoot$$anonfun$singleShot$2$$anonfun$apply$142.apply(Phase.scala:2924)
[error] at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
[error] at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
[error] at spinal.core.internals.SpinalVerilogBoot$$anonfun$singleShot$2.apply(Phase.scala:2924)
[error] at spinal.core.internals.SpinalVerilogBoot$$anonfun$singleShot$2.apply(Phase.scala:2860)
[error] at spinal.core.ScopeProperty$.sandbox(ScopeProperty.scala:71)
[error] at spinal.core.internals.SpinalVerilogBoot$.singleShot(Phase.scala:2860)
[error] at spinal.core.internals.SpinalVerilogBoot$.apply(Phase.scala:2855)
[error] at spinal.core.Spinal$.apply(Spinal.scala:412)
[error] at spinal.core.SpinalConfig.generate(Spinal.scala:176)
[error] at spinal.core.SpinalVerilog$.apply(Spinal.scala:431)
[error] at naxriscv.NaxSoc_2$.delayedEndpoint$naxriscv$NaxSoc_2$1(NAX_SOC.scala:297)
[error] at naxriscv.NaxSoc_2$delayedInit$body.apply(NAX_SOC.scala:269)
[error] at scala.Function0$class.apply$mcV$sp(Function0.scala:34)
[error] at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
[error] at scala.App$$anonfun$main$1.apply(App.scala:76)
[error] at scala.App$$anonfun$main$1.apply(App.scala:76)
[error] at scala.collection.immutable.List.foreach(List.scala:392)
[error] at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
[error] at scala.App$class.main(App.scala:76)
[error] at naxriscv.NaxSoc_2$.main(NAX_SOC.scala:269)
[error] at naxriscv.NaxSoc_2.main(NAX_SOC.scala)
[error] Nonzero exit code returned from runner: 1
[error] (Compile / runMain) Nonzero exit code returned from runner: 1
```
Right, that is because you have some AXI data width mismatch. NaxRiscv uses 64 bits AXI.
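For reference, a sketch of bringing the CPU-side buses down to a 32-bit interconnect using the plugin parameters already present in the NaxSoc_2 code above (ramDataWidth/ioDataWidth on FetchAxi4, dataWidth on DataCacheAxi4); whether this removes every mismatch depends on the rest of the SoC:

```scala
plugins += new FetchAxi4(
  ramDataWidth = 32,  // fetch bus toward RAM, instead of the 64-bit default
  ioDataWidth  = 32,  // fetch bus toward peripherals
  toPeripheral = cmd => toPeripheral(cmd.address)
)
plugins += new DataCacheAxi4(dataWidth = 32)           // data-cache refill/writeback bus
plugins += new LsuPeripheralAxiLite4(ioDataWidth = 32) // LSU peripheral bus
```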
Can it only be 64 bits? But I used Gen.scala with the DataCacheAxi4 and FetchAxi4 plugins to generate a NaxRiscv core with a data bit width set to 32 bits. Here are its inputs and outputs:
```verilog
/////////////
input wire PrivilegedPlugin_io_int_machine_timer /* verilator public */ ,
input wire PrivilegedPlugin_io_int_machine_software /* verilator public */ ,
input wire PrivilegedPlugin_io_int_machine_external /* verilator public */ ,
input wire PrivilegedPlugin_io_int_supervisor_external /* verilator public */ ,
input wire [63:0] PrivilegedPlugin_io_rdtime,
input wire EmbeddedJtagPlugin_logic_jtag_tms,
input wire EmbeddedJtagPlugin_logic_jtag_tdi,
output wire EmbeddedJtagPlugin_logic_jtag_tdo,
input wire EmbeddedJtagPlugin_logic_jtag_tck,
output wire EmbeddedJtagPlugin_logic_ndmreset,
output wire FetchAxi4_logic_axiRam_arvalid,
input wire FetchAxi4_logic_axiRam_arready,
output wire [31:0] FetchAxi4_logic_axiRam_araddr,
output wire [7:0] FetchAxi4_logic_axiRam_arlen,
output wire [2:0] FetchAxi4_logic_axiRam_arsize,
output wire [1:0] FetchAxi4_logic_axiRam_arburst,
output wire [2:0] FetchAxi4_logic_axiRam_arprot,
input wire FetchAxi4_logic_axiRam_rvalid,
output wire FetchAxi4_logic_axiRam_rready,
input wire [31:0] FetchAxi4_logic_axiRam_rdata,
input wire [1:0] FetchAxi4_logic_axiRam_rresp,
input wire FetchAxi4_logic_axiRam_rlast,
output wire FetchAxi4_logic_axiPeripheral_arvalid,
input wire FetchAxi4_logic_axiPeripheral_arready,
output wire [31:0] FetchAxi4_logic_axiPeripheral_araddr,
output wire [2:0] FetchAxi4_logic_axiPeripheral_arprot,
input wire FetchAxi4_logic_axiPeripheral_rvalid,
output wire FetchAxi4_logic_axiPeripheral_rready,
input wire [31:0] FetchAxi4_logic_axiPeripheral_rdata,
input wire [1:0] FetchAxi4_logic_axiPeripheral_rresp,
output wire LsuPlugin_peripheralBus_cmd_valid /* verilator public */ ,
input wire LsuPlugin_peripheralBus_cmd_ready /* verilator public */ ,
output wire LsuPlugin_peripheralBus_cmd_payload_write /* verilator public */ ,
output wire [31:0] LsuPlugin_peripheralBus_cmd_payload_address /* verilator public */ ,
output wire [31:0] LsuPlugin_peripheralBus_cmd_payload_data /* verilator public */ ,
output wire [3:0] LsuPlugin_peripheralBus_cmd_payload_mask /* verilator public */ ,
output wire [1:0] LsuPlugin_peripheralBus_cmd_payload_size /* verilator public */ ,
input wire LsuPlugin_peripheralBus_rsp_valid /* verilator public */ ,
input wire LsuPlugin_peripheralBus_rsp_payload_error /* verilator public */ ,
input wire [31:0] LsuPlugin_peripheralBus_rsp_payload_data /* verilator public */ ,
output wire DataCacheAxi4_logic_axi_awvalid,
input wire DataCacheAxi4_logic_axi_awready,
output wire [31:0] DataCacheAxi4_logic_axi_awaddr,
output wire [0:0] DataCacheAxi4_logic_axi_awid,
output wire [7:0] DataCacheAxi4_logic_axi_awlen,
output wire [2:0] DataCacheAxi4_logic_axi_awsize,
output wire [1:0] DataCacheAxi4_logic_axi_awburst,
output wire [2:0] DataCacheAxi4_logic_axi_awprot,
output wire DataCacheAxi4_logic_axi_wvalid,
input wire DataCacheAxi4_logic_axi_wready,
output wire [31:0] DataCacheAxi4_logic_axi_wdata,
output wire [3:0] DataCacheAxi4_logic_axi_wstrb,
output wire DataCacheAxi4_logic_axi_wlast,
input wire DataCacheAxi4_logic_axi_bvalid,
output wire DataCacheAxi4_logic_axi_bready,
input wire [0:0] DataCacheAxi4_logic_axi_bid,
input wire [1:0] DataCacheAxi4_logic_axi_bresp,
output wire DataCacheAxi4_logic_axi_arvalid,
input wire DataCacheAxi4_logic_axi_arready,
output wire [31:0] DataCacheAxi4_logic_axi_araddr,
output wire [0:0] DataCacheAxi4_logic_axi_arid,
output wire [7:0] DataCacheAxi4_logic_axi_arlen,
output wire [2:0] DataCacheAxi4_logic_axi_arsize,
output wire [1:0] DataCacheAxi4_logic_axi_arburst,
output wire [2:0] DataCacheAxi4_logic_axi_arprot,
input wire DataCacheAxi4_logic_axi_rvalid,
output wire DataCacheAxi4_logic_axi_rready,
input wire [31:0] DataCacheAxi4_logic_axi_rdata,
input wire [0:0] DataCacheAxi4_logic_axi_rid,
input wire [1:0] DataCacheAxi4_logic_axi_rresp,
input wire DataCacheAxi4_logic_axi_rlast,
input wire reset,
input wire clk,
input wire debug_reset
);
```
In the Gen.scala above, the original plugins have not been changed; I have added this code:
```scala
plugins += new FetchAxi4(
ramDataWidth = 32,
ioDataWidth = 32,
// toPeripheral = cmd => toPeripheral(cmd.address)
toPeripheral = address => True
// toPeripheral = 0xA0000000L
)
plugins += new DataCacheAxi4(
dataWidth = 32
)
```
Is there anything I need to pay attention to when using this core to read and write DDR? I have already integrated the core into the original SoC system, but currently OpenOCD is unable to detect the hart, and I don't know what happened :( The report is as follows:
```
Open On-Chip Debugger 0.11.0+dev-01873-g402df9ba8-dirty (2022-01-11-07:32)
Licensed under GNU GPL v2
For bug reports, read http://openocd.org/doc/doxygen/bugs.html
Info : libusb_open() failed with LIBUSB_ERROR_NOT_FOUND
Info : Using libusb driver
Info : clock speed 1000 kHz
Info : JTAG tap: riscv.cpu tap/device found: 0x10002fff (mfg: 0x7ff (<invalid>), part: 0x0002, ver: 0x1)
Info : datacount=1 progbufsize=2
Info : Disabling abstract command reads from CSRs.
Error: Timed out after 2s waiting for busy to go low (abstractcs=0x2001001). Increase the timeout with riscv set_command_timeout_sec.
Error: Fatal: Failed to read MISA from hart 0.
Warn : target riscv.cpu examination failed
Info : starting gdb server for riscv.cpu on 3333
Info : Listening on port 3333 for gdb connections
Info : JTAG tap: riscv.cpu tap/device found: 0x10002fff (mfg: 0x7ff (<invalid>), part: 0x0002, ver: 0x1)
Info : datacount=1 progbufsize=2
Error: Hart 0 doesn't exist.
Error: Abstract command ended in error 'busy' (abstractcs=0x2001101)
Error: Timed out after 2s waiting for busy to go low (abstractcs=0x2001101). Increase the timeout with riscv set_command_timeout_sec.
Error: Abstract command ended in error 'busy' (abstractcs=0x2001101)
Error: Timed out after 2s waiting for busy to go low (abstractcs=0x2001101). Increase the timeout with riscv set_command_timeout_sec.
```
Error: Fatal: Failed to read MISA from hart 0.
Did you use the embedded jtag plugin?
One possibility is that the CPU is stuck waiting on some lost memory transactions => freeze to death.
Can it only be 64 bits?
It doesn't have to be, but that's the default; right, you can change this.
Did you use the embedded jtag plugin?
I have used jtag and openocd to debug DDR reads and writes with the Vex core, and this time, when I replaced the Vex core with this Nax core for debugging, this issue occurred.
One possibility is that the CPU is stuck waiting on some lost memory transactions => freeze to death.
May I ask how to troubleshoot this issue? Thank you so much.
By running simulations.

Did you try to run a simulation of the SoC running some preloaded software?
OK, fine, dear Dolu, thank you very much, but I still have some problems with litex:

```
usage: {'description': 'LiteX SoC on Tang Primer.'}
       [-h] [--toolchain {td}] [--build] [--load] [--log-filename LOG_FILENAME] [--log-level LOG_LEVEL] [--flash]
       [--sys-clk-freq SYS_CLK_FREQ] [--output-dir OUTPUT_DIR] [--gateware-dir GATEWARE_DIR] [--software-dir SOFTWARE_DIR]
       [--include-dir INCLUDE_DIR] [--generated-dir GENERATED_DIR] [--build-backend BUILD_BACKEND] [--no-compile]
       [--no-compile-software] [--no-compile-gateware] [--soc-csv SOC_CSV] [--soc-json SOC_JSON] [--soc-svd SOC_SVD]
       [--memory-x MEMORY_X] [--doc] [--bios-lto] [--bios-format {integer,float,double}]
       [--bios-console {full,no-history,no-autocomplete,lite,disable}] [--bus-standard BUS_STANDARD]
       [--bus-data-width BUS_DATA_WIDTH] [--bus-address-width BUS_ADDRESS_WIDTH] [--bus-timeout BUS_TIMEOUT]
       [--bus-bursting] [--bus-interconnect BUS_INTERCONNECT] [--cpu-type CPU_TYPE] [--cpu-variant CPU_VARIANT]
       [--cpu-reset-address CPU_RESET_ADDRESS] [--cpu-cfu CPU_CFU] [--no-ctrl] [--integrated-rom-size INTEGRATED_ROM_SIZE]
       [--integrated-rom-init INTEGRATED_ROM_INIT] [--integrated-sram-size INTEGRATED_SRAM_SIZE]
       [--integrated-main-ram-size INTEGRATED_MAIN_RAM_SIZE] [--csr-data-width CSR_DATA_WIDTH]
       [--csr-address-width CSR_ADDRESS_WIDTH] [--csr-paging CSR_PAGING] [--csr-ordering CSR_ORDERING] [--ident IDENT]
       [--no-ident-version] [--no-uart] [--uart-name UART_NAME] [--uart-baudrate UART_BAUDRATE]
       [--uart-fifo-depth UART_FIFO_DEPTH] [--with-uartbone] [--with-jtagbone] [--jtagbone-chain JTAGBONE_CHAIN]
       [--no-timer] [--timer-uptime] [--l2-size L2_SIZE] [--scala-file SCALA_FILE] [--scala-args SCALA_ARGS] [--xlen XLEN]
       [--cpu-count CPU_COUNT] [--with-coherent-dma] [--with-jtag-tap] [--with-jtag-instruction]
       [--update-repo {latest,wipe+latest,recommended,wipe+recommended,no}] [--no-netlist-cache] [--with-fpu] [--with-rvc]
       [--l2-bytes L2_BYTES] [--l2-ways L2_WAYS]
```
I want to know how to use the argument --memory-x if I want to modify the starting address of the SRAM. Thanks again!
--memory-region=baseAddress,size,rwxc,m where :
No, no, no, I mean the argument --memory-x. I have done this and the errors are as follows:

```
python3.7 /home/jlduan/work1/litex-boards/litex_boards/targets/sipeed_tang_primer.py --cpu-type=naxriscv --bus-standard axi-lite --sys-clk-freq 50000000 --with-jtag-tap --with-rvc --build --cpu-variant=standard --integrated-sram-size 65536 --memory-region= 0x40000000,65536,rwxc,m
{'description': 'LiteX SoC on Tang Primer.'}: error: unrecognized arguments: --memory-region= 0x40000000,65536,rwxc,m
```
--memory-region= 0x40000000,65536,rwxc,m
I can see a space after the =, could that be the reason?
Ahhhhh, I do not know about memory-x; that isn't from SpinalHDL but from litex itself, no idea how it works.
Ahhh, I guess this parameter is related to the memory region in Scala, but even when I look at the py code, I don't know how to reference it. Even if the parameter is passed incorrectly, there is no error message.
My current difficulty is generating a Nax SoC using LiteX and successfully loading Dhrystone.elf into SRAM, but the core has been unable to run it :(
Hi, why is the default address width of MBus 16 bits? Which line of Scala code can modify the address width of MBus?
I have another question: after loading the program with jtag and GDB, there are abnormal PC register jumps. May I ask what the reason for this is:
```
> reg mtvec
mtvec (/32): 0x00000000
> reset halt
JTAG tap: riscv.cpu tap/device found: 0x10002fff (mfg: 0x7ff (<invalid>), part: 0x0002, ver: 0x1)
> reg mtvec 0x40000020
mtvec (/32): 0x40000020
> reg pc
pc (/32): 0x00000000
> reg pc 0x40000000
pc (/32): 0x40000000
> step;reg pc
pc (/32): 0x40000004
> step;reg pc
pc (/32): 0x40000008
> step;reg pc\
register pc\ not found in current target
> step;reg pc\
register pc\ not found in current target
> step;reg pc
pc (/32): 0x00000008
> step;reg pc
pc (/32): 0x00000000
> halt
```
Hi, why is the default address width of MBus 16 bits? Which line of Scala code can modify the address width of MBus?
It probably all depends on how you memory mapped it. If you mapped only 64KB there, the address size will reflect that.
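As an illustration of that (hypothetical sram slave, same crossbar style as the NaxSoc_2 code above): the per-slave address width follows from the mapped size, so a 64 KiB window only needs 16 address bits:

```scala
main_bus.addSlaves(
  // 64 KiB => log2Up(64 KiB) = 16 address bits on the bus toward this slave
  sram.io.axi -> (0x40000000L, 64 KiB)
)
```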
I have another question: after loading the program with jtag and GDB, there are abnormal PC register jumps. May I ask what the reason for this is:
Yes, I also noticed that; it seems more like an issue where openocd shows a wrong value. Not sure if that is an openocd issue or a hardware issue, or a mix of both.

But overall, the debug works; it is just that the first PC showing up is messed up.
My current difficulty is generating a Nax SoC using LiteX and successfully loading Dhrystone.elf into SRAM,
I never tried that; I was always using heavy buildroot apps loaded via the litex bootloader or via jtag directly.
I also did the same. I set the SRAM address to 0x40000000 with a size of 64K using LiteX, and then loaded Dhrystone.elf through openocd. After reading the memory back, I found that the program was loaded successfully, but it did not produce any results. I waited for several hours but it could not produce any results and remained stuck on this interface:
```
Reading symbols from .\Dhrystone_DDR_Payload.elf...
(gdb) target remote:3333
Remote debugging using :3333
0x00000000 in ?? ()
(gdb) load
Loading section ._vector, size 0x118 lma 0x40000000
Loading section .memory, size 0x1702 lma 0x40000118
Loading section .text.startup, size 0x50c lma 0x4000181a
Loading section .rodata, size 0x4b4 lma 0x40001d28
Loading section .data, size 0xc lma 0x400021dc
Start address 0x40000000, load size 8678
Transfer rate: 33 KB/sec, 1735 bytes/write.
(gdb) c
Continuing.
^C   (using Ctrl+C)
Program received signal SIGINT, Interrupt.
0x00000000 in ?? ()
(gdb)
```
And even with the correct configuration, the serial port did not receive any information. Because I was able to use this set of operations to run and get results on VexRiscv before, I continued to use the same steps on NaxRiscv, but it seems to not work: 1. Connect JTAG. 2. Run openocd, load the elf program, and then execute the program. Could you please check if there is an issue with my operation or configuration? Thank you!!
I would say, as a first step, do not use GDB, but instead the openocd telnet, as it is closer to the target.
Program received signal SIGINT, Interrupt. 0x00000000 in ?? ()
Means (likely) there was an exception and the CPU went to its trap vector, which was set to 0.

So, first load the binary via openocd, then check that the memory is readable via jtag as well and matches what you loaded, then maybe go step by step.
Small step by small step, validating things gently
I would say, as a first step, do not use GDB, but instead the openocd telnet, as it is closer to the target.
You're right; my first step was to use one powershell to successfully connect to JTAG and the nax core. Then I opened another powershell window and ran GDB, which showed the same interface as before.
So, first load the binary via openocd, then check that the memory is readable via jtag as well and matches what you loaded, then maybe go step by step.
After loading with GDB, I was able to use commands such as mdw and mww to read and write memory, and I could also use the mdw command to see that the elf file was successfully loaded. Then I used the command c.
Means (likely) there was an exception and the CPU went to its trap vector, which was set to 0.
Will this issue be related to the cfg file of OpenOCD? Because my cfg file was fine-tuned based on the Vex one and ran successfully. Dhrystone.elf was also ported and used as-is after Vex ran it.
If you used the Vex with the official RISC-V jtag debug (and not the custom one) things should work the same. But maybe the memory mapping of the peripherals isn't the same.
You need to use the step by step in openocd: `step; reg pc`
And see where it goes south
If you used the Vex with the official RISC-V jtag debug (and not the custom one) things should work the same. But maybe the memory mapping of the peripherals isn't the same.
The starting address and size for loading the elf file are consistent. I remember that as long as this address is consistent, the SRAM can successfully load, read, write, and run it normally, just like with Vex. Everything else is left at the LiteX defaults.
You need to use the step by step in openocd: `step; reg pc`
Yes, great minds think alike, and I did the same. Then I encountered the abnormal PC jump issue that I previously asked you about.
And see where it goes south
Just like the PC register issue I asked you about before.
Can you show me some openocd logs of the binary being loaded, then multiple `step; reg pc`?
* the elf file being loaded
OK, fine, I have made it: openocd_rv402_dbg.log
```
Debug: 11209 65144 riscv.c:3982 register_get(): [riscv.cpu] read 0x40000008 from pc (valid=0)
Debug: 11363 66040 command.c:201 script_debug(): command - reg pc
Debug: 11370 66040 riscv-013.c:4074 riscv013_get_register(): [riscv.cpu] reading register pc
Debug: 11402 66043 riscv-013.c:1496 register_read_direct(): {0} dpc = 0x0
Debug: 11403 66043 riscv-013.c:4083 riscv013_get_register(): [0] read PC from DPC: 0x0
Debug: 11404 66043 riscv.c:3632 riscv_get_register(): [riscv.cpu] pc: 0
Debug: 11405 66043 riscv.c:3982 register_get(): [riscv.cpu] read 0x00000000 from pc (valid=0)
```
What is the binary being loaded? Especially at 0x40000008.
```
> mdw 0x40000000 256
0x40000000: 40000137 08a10113 000100e7 00000001 00000000 00000000 00000000 00000000
0x40000020: fe112e23 fe512c23 fe612a23 fe712823 fea12623 feb12423 fec12223 fed12023
```

This is the corresponding part of the elf file converted to a mif file:
```
WIDTH=32;
DEPTH=4096;
ADDRESS_RADIX=DEC;
DATA_RADIX=HEX;
CONTENT BEGIN
0 : 40000137;
1 : 8a10113;
2 : 100e7;
3 : 1;
4 : 0;
5 : 0;
6 : 0;
7 : 0;
8 : fe112e23;
9 : fe512c23;
10 : fe612a23;
11 : fe712823;
12 : fea12623;
13 : feb12423;
14 : fec12223;
15 : fed12023;
16 : fce12e23;
17 : fcf12c23;
18 : fd012a23;
19 : fd112823;
20 : fdc12623;
```
Hmmm, so when it reaches 0x40000008, check PC, mstatus, mcause, and mepc, then do a step, and then check them again.
Hi, can you provide more details? Is there a problem with the binary file being loaded?
Is this the reason why the program is stuck?
```
> reg pc
pc (/32): 0x40000000
> step;reg pc
pc (/32): 0x40000004
> step;reg pc
pc (/32): 0x40000008
> reg mcause
mcause (/32): 0x00000000
> reg mstayus
register mstayus not found in current target
> reg mstatus
mstatus (/32): 0x00000000
> reg mepc
mepc (/32): 0x00000000
> step;reg pc
pc (/32): 0x00000000
> reg mepc
mepc (/32): 0x00000000
> reg mstatus
mstatus (/32): 0x00000000
> reg mcause
mcause (/32): 0x00000000
```
Ahh, that is weird XD. Can you send me your elf file + the whole litex command you use + the openocd scripts? I will try to reproduce it in simulation.
Is this the reason why the program is stuck?
No good reason, I would say.
OK, fine, you can see:
https://github.com/duanjiulon/NAX_file/tree/main
I have sent you all the necessary files. You should only need to open openocd and gdb during simulation and load the files. Additionally, the litex command is:

```
python3.7 /home/jlduan/work1/litex-boards/litex_boards/targets/sipeed_tang_primer.py --cpu-type=naxriscv --bus-standard axi-lite --sys-clk-freq 50000000 --with-jtag-tap --with-rvc --build --cpu-variant=standard --integrated-sram-size 65536
```
I don't know how to correctly modify the starting address of the SRAM through the command line, so I changed it directly in soc_core.py. You can refer to readme.md for details. You can directly use the V file that I uploaded (generated through the command above) for simulation or debugging. Thank you for your guidance, dear Dolu.
Hmm, it may be related to the openocd version showing some NaxRiscv bug.

I just tried in simulation via:

```
litex_sim --cpu-type=naxriscv --bus-standard axi-lite --with-jtag-tap --with-rvc --update-repo no --with-jtagremote --integrated-sram-size 65536
openocd -f vexiiriscv_sim.tcl # https://github.com/SpinalHDL/VexiiRiscv/blob/dev/src/main/tcl/openocd/vexiiriscv_sim.tcl
```
And with very, very upstream openocd riscv, it doesn't go well, while with openocd 0.11.0 things seem fine: I can load the binary and step through it, seems OK.

What version of openocd do you have? Can you try with openocd 0.11.0 (the official one from the openocd people themselves, not a custom one)?
Hi, according to your instructions, I git cloned the source code of openocd 0.11.0 and compiled it in my environment to obtain the .exe file. After that, I loaded the file again for debugging, and the result was still the same as before. In addition, I also tried version 0.12.0 and the RISC-V specific versions, and the results were similar to before: some could jump out of 0x00000000 and repeatedly bounce between 0x00000000 and 0x00000002, while version 0.11.0 was directly stuck at 0x00000000.
On hardware, right? It wasn't tested in sim via litex_sim, right?

I'm now testing stuff in sim; so far so good, but that is while testing on the bios itself. One question I have is how much of the SoC works (without considering the jtag debug)? Can you run some binaries at 0x40000000 fine?
Unfortunately, I created a binary file that outputs hello and ran it from the ROM. However, when running `step; reg pc`, only three steps could be run. Later, following your instructions, I wanted to verify through simulation:
```
   __   _ __      _  __
  / /  (_) /____ | |/_/
 / /__/ / __/ -_)>  <
/____/_/\__/\__/_/|_|
Build your hardware, easily!

(c) Copyright 2012-2023 Enjoy-Digital
(c) Copyright 2007-2015 M-Labs

BIOS built on Apr 16 2024 15:13:45
BIOS CRC passed (da917189)

LiteX git sha1: --------

--=============== SoC ==================--
CPU: NaxRiscv @ 1MHz
BUS: AXI-LITE 32-bit @ 4GiB
CSR: 32-bit data
ROM: 128.0KiB
SRAM: 64.0KiB

--============== Boot ==================--
Booting from serial...
Press Q or ESC to abort boot completely.
sL5DdSMmkekro
Timeout
No boot medium found

--============= Console ================--
```
May I ask how to boot through a serial port during simulation? Can you give an example? Isn't it impossible to connect hardware during simulation?
May I ask how to boot through a serial port during simulation? Can you give an example?
I don't know much about litex for this kind of thing.
Isn't it impossible to connect hardware during simulation?
What hardware? To connect jtag to a simulation, we use a virtualized jtag over TCP.
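A rough SpinalSim sketch of that setup, reusing the NaxSoc_2 toplevel and the JtagTcp helper its imports already pull in (spinal.lib.com.jtag.sim.JtagTcp); the clock period and wait duration here are arbitrary assumptions:

```scala
import spinal.core._
import spinal.core.sim._
import spinal.lib.com.jtag.sim.JtagTcp

object NaxSocSim extends App {
  SimConfig.withWave.compile(new NaxSoc_2(NaxSoc_2.plugins, address => False)).doSim { dut =>
    // Drive the explicit clock/reset pins of the SoC
    val cd = ClockDomain(dut.io.axiClk)
    cd.forkStimulus(period = 10)
    dut.io.asyncReset #= false
    dut.io.debug_resetn #= true
    dut.io.timerInterrupt #= false
    dut.io.softInterrupt #= false
    dut.io.xtnlInterrupt #= false
    // Expose the DUT's JTAG over TCP so openocd can connect to the simulation
    JtagTcp(dut.io.debug, jtagClkPeriod = 100)
    cd.waitSampling(10000000)
  }
}
```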
Hi, dear Dolu, so generally speaking, do you debug through litex_sim and remote openocd? Because I usually use on-board debugging, I am not very clear about the specific steps of remote debugging, especially through TCP. Can you explain it in more detail? Or could you please provide detailed steps so that I can simulate and try it out tomorrow?
The litexGen isn't a self-contained runner; it needs some arguments and an additional scala file to generate. For instance, https://github.com/litex-hub/pythondata-cpu-naxriscv/blob/master/pythondata_cpu_naxriscv/verilog/configs/gen.scala should be provided via the --scala-file argument.
Originally posted by @Dolu1990 in https://github.com/SpinalHDL/NaxRiscv/issues/15#issuecomment-1296149708