AXI4Arbiter object Scala code reading

This is a walk-through of the AXI4Arbiter object (from rocket-chip's AMBA AXI4 library), which arbitrates several Irrevocable sources onto a single sink using a pluggable TLArbiter policy.

```scala
object AXI4Arbiter {
  def apply[T <: Data](policy: TLArbiter.Policy)(sink: IrrevocableIO[T], sources: IrrevocableIO[T]*): Unit = {
    if (sources.isEmpty) {
      sink.valid := false.B
    } else {
      returnWinner(policy)(sink, sources:_*)
    }
  }
```

```scala
  def returnWinner[T <: Data](policy: TLArbiter.Policy)(sink: IrrevocableIO[T], sources: IrrevocableIO[T]*) = {
    require (!sources.isEmpty)

    // The arbiter is irrevocable; when !idle, repeat last request
    val idle = RegInit(true.B)

    // Who wants access to the sink?
    val valids = sources.map(_.valid)
    val anyValid = valids.reduce(_ || _)

    // Arbitrate amongst the requests
    val readys = VecInit(policy(valids.size, Cat(valids.reverse), idle).asBools)

    // Which request wins arbitration?
    val winner = VecInit((readys zip valids) map { case (r, v) => r && v })
```
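The policy maps the valid vector to a ready vector, and a source wins only when it is both ready and valid. TLArbiter ships policies such as lowestIndexFirst and roundRobin; as a plain-Scala sketch (a hypothetical Boolean model for illustration, not the Chisel implementation, which operates on UInt bit vectors), a lowest-index-first policy and the resulting winner computation look like this:

```scala
// Plain-Scala model of a lowest-index-first arbitration policy.
// valids(i) is true when source i is requesting; the result marks
// exactly the lowest requesting index (all false when none request).
def lowestIndexFirst(valids: Seq[Boolean]): Seq[Boolean] = {
  val first = valids.indexOf(true)        // -1 when nobody requests
  valids.indices.map(i => i == first)
}

// "winner" as in the Chisel code: ready AND valid, per source.
def winners(valids: Seq[Boolean]): Seq[Boolean] =
  (lowestIndexFirst(valids) zip valids).map { case (r, v) => r && v }
```

With this model, `winners(Seq(false, true, true))` grants only index 1, and an all-false request vector produces no winner.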

```scala
    // Confirm the policy works properly
    require (readys.size == valids.size)

    // Never two winners
    val prefixOR = winner.scanLeft(false.B)(_ || _).init
    assert((prefixOR zip winner) map { case (p, w) => !p || !w } reduce { _ && _ })

    // If there was any request, there is a winner
    assert (!anyValid || winner.reduce(_ || _))
```
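The "never two winners" assertion works because `prefixOR(i)` is the OR of all winner bits strictly below index i, so a second winner would see a true prefix and violate the implication `!p || !w`. The same check can be reproduced in plain Scala over Booleans (a sketch of the assertion's logic, not the Chisel code itself):

```scala
// Plain-Scala version of the "never two winners" check: prefixOR(i)
// ORs together winner(0..i-1); a bit may only win when every bit
// below it lost, i.e. the implication !p || !w holds at each index.
def atMostOneWinner(winner: Seq[Boolean]): Boolean = {
  val prefixOR = winner.scanLeft(false)(_ || _).init
  (prefixOR zip winner).map { case (p, w) => !p || !w }.forall(identity)
}
```

A one-hot (or all-false) vector passes; any vector with two set bits fails.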

```scala
    // The one-hot source granted access in the previous cycle
    val state = RegInit(VecInit.fill(sources.size)(false.B))
    val muxState = Mux(idle, winner, state)
    state := muxState

    // Determine when we go idle
    when (anyValid) { idle := false.B }
    when (sink.fire) { idle := true.B }
```
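These two `when` blocks implement the lock that makes the arbiter irrevocable: once any source requests, the grant is held until the sink handshake fires, and because the second `when` is written last it wins when both conditions are true in the same cycle (Chisel's last-connect semantics). A plain-Scala model of the update rule (a sketch for illustration, with the priority order made explicit):

```scala
// Model of the idle register's next-state function. sinkFire is
// checked first because the later `when` in the Chisel code wins
// on conflict: a fire releases the lock even as a new request
// arrives, letting a fresh arbitration happen next cycle.
def nextIdle(idle: Boolean, anyValid: Boolean, sinkFire: Boolean): Boolean =
  if (sinkFire) true        // handshake completed: release the lock
  else if (anyValid) false  // a pending request takes the lock
  else idle                 // otherwise hold the current state
```

So a request with no completed handshake locks the arbiter, and a completed handshake always returns it to idle.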

```scala
    if (sources.size > 1) {
      val allowed = Mux(idle, readys, state)
      (sources zip allowed) foreach { case (s, r) =>
        s.ready := sink.ready && r
      }
    } else {
      sources(0).ready := sink.ready
    }

    sink.valid := Mux(idle, anyValid, Mux1H(state, valids))
    sink.bits :<= Mux1H(muxState, sources.map(_.bits))

    muxState
  }
}
```
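The final connections route the locked-in source through to the sink: `Mux1H` selects the one input whose select bit is hot, which is safe here because `state` and `muxState` are one-hot by construction (the assertions above guarantee at most one winner). A simplified plain-Scala model of `Mux1H` (note the real Chisel primitive returns an undefined value when no select bit is hot; this model returns `None` instead):

```scala
// Plain-Scala model of Mux1H: with a one-hot select vector, at most
// one (sel, in) pair contributes, and that input is returned.
// Returns None when no select bit is set (undefined in real Mux1H).
def mux1H[A](sel: Seq[Boolean], in: Seq[A]): Option[A] =
  (sel zip in).collectFirst { case (true, v) => v }
```

For example, selecting with `Seq(false, true, false)` over `Seq(10, 20, 30)` yields the middle input.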
