bcc5f81f | 18-Apr-2025 |
Zhaoyang You <[email protected]> |
fix(csr): fix trap handle bundle format (#4579) |
c01e75b5 | 16-Apr-2025 |
Ziyue Zhang <[email protected]> |
fix(exceptionGen): clear isEnqExcp when older or curr wb exception coming (#4570) |
26814fb3 | 16-Apr-2025 |
HuSipeng <[email protected]> |
feat(Ftq): split Ftq meta SRAM into smaller size (#4569)
split Ftq meta SRAM into smaller size: (64 × 160) × 2 -> (64 × 80) × 4 |
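A minimal plain-Scala sketch (not the actual Chisel SRAM wrapper) of the idea behind this split: the depth stays at 64 entries, but each 160-bit meta word is stored as two 80-bit halves in separate banks, so four 64 × 80 macros replace the original two 64 × 160 macros. The class names and widths below are illustrative assumptions.
```scala
// Illustrative model only: one logical 160-bit Ftq meta entry is kept as two
// 80-bit halves in separate, narrower banks (each a 64 x 80 macro).
class MetaBank(depth: Int, width: Int) {
  private val mask = (BigInt(1) << width) - 1
  private val mem  = Array.fill(depth)(BigInt(0))
  def write(idx: Int, data: BigInt): Unit = mem(idx) = data & mask
  def read(idx: Int): BigInt = mem(idx)
}

class SplitFtqMeta(depth: Int = 64) {
  private val lo = new MetaBank(depth, 80) // bits  79..0
  private val hi = new MetaBank(depth, 80) // bits 159..80
  def write(idx: Int, data: BigInt): Unit = {
    lo.write(idx, data)
    hi.write(idx, data >> 80)
  }
  def read(idx: Int): BigInt = (hi.read(idx) << 80) | lo.read(idx)
}
``` |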
e5325730 | 15-Apr-2025 |
cz4e <[email protected]> |
fix(DFT): fix `DFT` cgen connection (#4565) |
a25f1ac9 | 15-Apr-2025 |
Guanghui Cheng <[email protected]> |
fix(trace): fix parameters of trace (#4561) |
011d262c | 15-Apr-2025 |
Zhaoyang You <[email protected]> |
feat(PMA, CSR): support PMA CSR configurable (#4233) |
4a02bbda | 15-Apr-2025 |
Anzo <[email protected]> |
fix(LSU): misalign writeback aligned raw rollback (#4476)
By convention, we need to make `rollback` and `writeback` happen at the same time, and not make `writeback` earlier than `rollback`.
Currently, the `rollback` generated by RAW occurs at `s4`. A normal store takes an extra N beats after the end of `s3` (based on the number of RAWQueue entries, currently 1 beat), which is equivalent to `writeback` at `s4`. A misaligned store would `writeback` at `s2` and then `writeback` again after switching to the `s_wb` state, which is equivalent to `writeback` at `s3`.
---
This PR adjusts the misaligned `writeback` logic to align with the `StoreUnit`. At the same time, it unifies the way the number of beats is calculated.
|
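A small plain-Scala sketch (illustrative only, not the actual StoreUnit/misalign buffer RTL) of the convention described in the commit above: rollback and writeback are expressed as effective stage indices, and writeback must never land earlier than a potential rollback of the same store. The stage numbers and the `rawQueueBeats` parameter are assumptions.
```scala
// Illustrative timing model: rollback from the RAW check is generated at s4;
// writeback leaves s3 plus a number of extra beats derived from the RAWQueue.
case class StoreTiming(rollbackStage: Int, writebackStage: Int) {
  require(writebackStage >= rollbackStage,
    "by convention, writeback must not happen earlier than rollback")
}

def storeTiming(rawQueueBeats: Int): StoreTiming =
  StoreTiming(rollbackStage = 4, writebackStage = 3 + rawQueueBeats)

// With one RAWQueue beat, writeback effectively happens at s4, together with
// rollback; the misaligned path is adjusted to follow the same rule.
println(storeTiming(rawQueueBeats = 1)) // StoreTiming(4,4)
``` |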
3933ec0c | 15-Apr-2025 |
Zhaoyang You <[email protected]> |
fix(vstopi): remove SEI from Candidate 4 (#4533)
If hvictl.VTI = 0: the highest-priority pending-and-enabled major interrupt indicated by vsip and vsie other than a supervisor external interrupt (code 9), using the priority numbers assigned by hviprio1 and hviprio2.
A hypervisor can choose to employ registers hviprio1 and hviprio2 when emulating the (virtual) supervisor-level iprio array accessed indirectly through siselect and sireg (really vsiselect and vsireg) for a virtual hart. For interrupts not in the subset supported by hviprio1 and hviprio2, the priority number bytes in the emulated iprio array can be read-only zeros.
|
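A minimal plain-Scala sketch (not the actual CSR implementation) of the candidate-4 rule quoted above: among the interrupt codes that are both pending in vsip and enabled in vsie, the supervisor external interrupt (code 9) is excluded, and the remaining code with the best hviprio1/hviprio2-assigned priority number wins. The data layout and the tie-break are simplifying assumptions.
```scala
// Illustrative model of vstopi "Candidate 4" selection when hvictl.VTI = 0.
// pendingAndEnabled: codes set in both vsip and vsie.
// hviprio: priority number per code from hviprio1/hviprio2 (lower = higher
//          priority). The tie-break by code is a simplification of the
//          architectural default priority order.
def candidate4(pendingAndEnabled: Set[Int], hviprio: Map[Int, Int]): Option[Int] = {
  val SEI = 9 // supervisor external interrupt is excluded from this candidate
  pendingAndEnabled
    .filterNot(_ == SEI)
    .toSeq
    .sortBy(code => (hviprio.getOrElse(code, 0), code))
    .headOption
}
``` |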
814aa9ec | 15-Apr-2025 |
yulightenyu <[email protected]> |
fix: add low power related logic (#4554) |
c38ad5d1 | 15-Apr-2025 |
Guanghui Cheng <[email protected]> |
fix(CSR): remove useless logic of `mIRVec` (#4553) |
69b78670 | 14-Apr-2025 |
Tang Haojin <[email protected]> |
feat: add more configuration (#4532) |
30f35717 | 14-Apr-2025 |
cz4e <[email protected]> |
refactor(DFT): refactor `DFT` IO (#4530) |
9feb8e87 | 14-Apr-2025 |
Haoyuan Feng <[email protected]> |
fix(L2TlbPrefetch): fix flush condition of L2 TLB Prefetch (#4541)
All components within the L2 TLB should use the same flush condition to avoid potential issues. |
e5ff9bcb | 14-Apr-2025 |
Haoyuan Feng <[email protected]> |
fix(PTW): fix exception gen when both af and (pf | gpf) occur (#4540)
When `pte_valid` is true and a page fault or guest page fault occurs, the original design only treated `ppn_af` as invalid (without checking whether the higher bits of ppn are zero). However, in this case, the PMP check would still be performed, potentially raising the `accessFault` signal.
This commit fixes the bug by ensuring that if a PMP check fails, only `accessFault` is raised, and pf or gpf will not be incorrectly asserted. Therefore, when either pf or gpf is valid, any `accessFault` resulting from PMP should be ignored.
|
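A small plain-Scala sketch (illustrative, not the actual PTW exception generator) of the priority described above: once a page fault or guest page fault is valid for the translated PTE, an access fault coming from the PMP check is suppressed; the PMP access fault is only reported when no (guest) page fault is present. The signal names are assumptions.
```scala
// Illustrative exception-priority model for a PTW response.
case class PtwFaults(pf: Boolean, gpf: Boolean, af: Boolean)

def resolveFaults(pteValid: Boolean, pf: Boolean, gpf: Boolean, pmpAf: Boolean): PtwFaults = {
  // A (guest) page fault found on a valid PTE takes priority; in that case
  // any access fault produced by the PMP check is ignored.
  val pageFault = pteValid && (pf || gpf)
  PtwFaults(pf = pteValid && pf, gpf = pteValid && gpf, af = pmpAf && !pageFault)
}
``` |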
c31be712 | 14-Apr-2025 |
Haoyuan Feng <[email protected]> |
fix(PTWCache): hfence_gvma should ignore g bit (#4539)
In "RISC-V Instruction Set Manual: Volume II: Privileged Architecture": The G bit in all G-stage PTEs is reserved for future standard use. Unti
fix(PTWCache): hfence_gvma should ignore g bit (#4539)
In "RISC-V Instruction Set Manual: Volume II: Privileged Architecture": The G bit in all G-stage PTEs is reserved for future standard use. Until its use is defined by a standard extension, it should be cleared by software for forward compatibility, and must be ignored by hardware.
Co-authored-by: SpecialWeeks <[email protected]>
|
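A plain-Scala sketch (not the real page-table cache) of the rule above: when matching G-stage entries for an hfence.gvma flush, the entry's G bit must not affect the decision, since it is reserved in G-stage PTEs and must be ignored by hardware. The entry fields and the optional address/VMID filters are illustrative assumptions.
```scala
// Illustrative G-stage entry and hfence.gvma match rule.
case class GStageEntry(gvpn: BigInt, vmid: Int, g: Boolean)

def hfenceGvmaHit(e: GStageEntry, gvpn: Option[BigInt], vmid: Option[Int]): Boolean = {
  val addrMatch = gvpn.forall(_ == e.gvpn) // None means "all guest addresses"
  val vmidMatch = vmid.forall(_ == e.vmid) // None means "all VMIDs"
  // The entry's G bit is deliberately not consulted here.
  addrMatch && vmidMatch
}
``` |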
35bb7796 | 14-Apr-2025 |
Anzo <[email protected]> |
fix(LSU): fix exception for misalign access to `nc` space (#4526)
For misaligned accesses, if an access after the split goes to `nc` space, a misaligned exception should also be generated.
Co-authored-by: Yanqin Li <[email protected]>
|
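A tiny plain-Scala sketch (illustrative only) of the check described above: a misaligned access is split into parts, and if any part falls into `nc` space, the original access takes a misaligned exception instead of being completed by the split path. The `isNc` address-attribute lookup is an assumed helper.
```scala
// Illustrative check: any split part of a misaligned access landing in nc
// space forces a misaligned exception for the whole access.
def misalignRaisesException(splitAddrs: Seq[BigInt], isNc: BigInt => Boolean): Boolean =
  splitAddrs.exists(isNc)
``` |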
e7412eb4 | 14-Apr-2025 |
Haoyuan Feng <[email protected]> |
fix(LLPTW): each LLPTW entry should use its own s2xlate (#4510)
In the previous design, the input s2xlate signal was directly used to determine whether to virtualize, but the input signal changed to the default value 0 due to timing problems, resulting in the use of the wrong PBMTE.
In fact, LLPTW can handle both virtualized and non-virtualized requests simultaneously. This information is stored in entries(i).req_info.s2xlate. By using this signal, we can distinguish between PBMTEs under different virtualization modes. This commit fixes the bug.
Co-authored-by: SpecialWeeks <[email protected]>
|
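A plain-Scala sketch (not the actual LLPTW) of the fix described above: each entry keeps the `s2xlate` of its own request, and the PBMTE used for that entry is selected from it rather than from the shared module input. The `mPbmte`/`hPbmte` names and the mapping are assumptions (menvcfg.PBMTE for non-virtualized requests, henvcfg.PBMTE when G-stage translation is involved).
```scala
// Illustrative per-entry PBMTE selection keyed off the entry's own s2xlate.
sealed trait S2xlate
case object NoS2xlate  extends S2xlate // non-virtualized request
case object AllStage   extends S2xlate // both VS-stage and G-stage
case object OnlyStage1 extends S2xlate
case object OnlyStage2 extends S2xlate

case class LlptwEntry(s2xlate: S2xlate)

def effectivePbmte(entry: LlptwEntry, mPbmte: Boolean, hPbmte: Boolean): Boolean =
  entry.s2xlate match {
    case NoS2xlate => mPbmte // assumed: menvcfg.PBMTE for non-virtualized walks
    case _         => hPbmte // assumed: henvcfg.PBMTE for virtualized walks
  }
``` |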
05cc6da9 | 14-Apr-2025 |
Yanqin Li <[email protected]> |
fix(prefetch): fix control signals of l1 prefetchers (#4534) |
667758b3 | 13-Apr-2025 |
Haoyuan Feng <[email protected]> |
Revert "fix(TLB): should always send onlyS1 req when req_need_gpa (#4… (#4551)
…513)"
This reverts commit 6d07e62cded1f9718c229f7c38d297fed2c95cb8.
At the same time, this commit also fixes a bug where an onlyS1 request was issued when `req_need_gpa` was active. In fact, even when `req_need_gpa` is active, it is necessary to determine whether to issue allStage, onlyS1, or onlyS2 based on the values of the vsatp or hgatp registers. The previous approach in https://github.com/OpenXiangShan/XiangShan/pull/4513 was incorrect, and the details are as follows:
The process of two-stage address translation:
vaddr  -> VS-L2 -> G-L2 -> G-L1 -> G-L0 (G-Stage for VS-Stage)
       -> VS-L1 -> G-L2 -> G-L1 -> G-L0 (G-Stage for VS-Stage)
       -> VS-L0 -> G-L2 -> G-L1 -> G-L0 (G-Stage for VS-Stage)
       -> gpaddr -> G-L1 -> G-L0 -> paddr (last G-Stage)
When a page fault occurs at "G-Stage for VS-Stage" in the diagram, the corresponding VS-Stage result before the arrow is the gpaddr value to be written to the *tval register.
Suppose, for example, that the required gpaddr comes from VS-L0. Although the G-Stage query result for VS-L0 itself is not needed, two G-Stage walks are clearly required before it, after VS-L2 and VS-L1, in order to obtain the correct VS-L0 memory access address. If the L1 TLB directly issues an onlyS1 request, all G-Stage requests are skipped, which is incorrect.
|
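A plain-Scala sketch (illustrative, not the actual TLB code) of the decision this revert restores: even when `req_need_gpa` is active, the request type still follows the vsatp and hgatp modes, because intermediate G-Stage walks are needed to reach VS-L0 at all; `onlyS1` is only correct when G-Stage translation is disabled. The boolean encoding of the modes is an assumption.
```scala
// Illustrative choice of the PTW request type for a virtualized access.
sealed trait PtwReqType
case object ReqAllStage extends PtwReqType // VS-stage and G-stage both active
case object ReqOnlyS1   extends PtwReqType // VS-stage only (hgatp.MODE = Bare)
case object ReqOnlyS2   extends PtwReqType // G-stage only (vsatp.MODE = Bare)

// req_need_gpa must not override the modes: if the G-stage is enabled the
// walk still has to be allStage so the intermediate G-stage translations run.
def reqType(vsatpEnabled: Boolean, hgatpEnabled: Boolean): PtwReqType =
  if (vsatpEnabled && hgatpEnabled) ReqAllStage
  else if (vsatpEnabled) ReqOnlyS1
  else ReqOnlyS2 // with both modes Bare no translation request is issued at all
``` |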
724e3eb4 | 10-Apr-2025 |
Yanqin Li <[email protected]> |
fix(StoreQueue): keep readPtr until slave ack when outstanding (#4531) |
c9c4960f | 10-Apr-2025 |
Ziyue Zhang <[email protected]> |
fix(decode): block the vector decode until vsetvl has committed (#4535) |
8795ffc0 | 10-Apr-2025 |
Sam Castleberry <[email protected]> |
feat: move frontend SRAM read-write conflict handling to SRAMTemplate (#4445)
Hello, this change set is to remove the SRAM read-write conflict handling logic in the frontend, after OpenXiangShan/Utility#110 has been merged, which adds this logic to the SRAMTemplate. See that pull request and also #4242 for more context.
After this change, I see microbench IPC change 1.397 -> 1.413 and coremark IPC change 2.136 -> 2.147. The branch mispredictions also decreased slightly in both.
This probably cannot be merged automatically, since the utility submodule should point to the new revision after merging instead of the revision in my branch.
Thanks, Sam
|
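A plain-Scala sketch (illustrative; the real logic lives in the Utility SRAMTemplate after OpenXiangShan/Utility#110) of what read-write conflict handling means here: when a read and a write hit the same SRAM in the same cycle, the wrapper resolves the conflict once instead of every frontend user doing it. One common resolution, forwarding the written data to a same-address read, is shown; the actual SRAMTemplate behaviour may differ.
```scala
// Illustrative single-cycle model of an SRAM wrapper that resolves a
// same-cycle read/write conflict by forwarding the write data to the reader.
class ConflictHandlingMem(depth: Int) {
  private val mem = Array.fill(depth)(BigInt(0))

  // One "cycle": an optional write and an optional read presented together.
  def cycle(write: Option[(Int, BigInt)], readAddr: Option[Int]): Option[BigInt] = {
    val readData = readAddr.map { ra =>
      write match {
        case Some((wa, wd)) if wa == ra => wd      // conflict: forward new data
        case _                          => mem(ra) // no conflict: read the array
      }
    }
    write.foreach { case (wa, wd) => mem(wa) = wd }
    readData
  }
}
``` |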
4ec1f462 | 09-Apr-2025 |
cz4e <[email protected]> |
timing(StoreMisalignBuffer): fix misalign buffer enq timing (#4493)
* a misaligned store enqueues the misalign buffer at s1 and is revoked at s2 if necessary |
cfbfe74e | 09-Apr-2025 |
Haoyuan Feng <[email protected]> |
fix(MMU): fix gvpn generate when PTWCache Stage1Hit a napot entry (#4527)
For the allStage case, when there is a PageTableCache Stage1Hit, the GVPN is reconstructed within the PTW. However, this reconstruction did not account for the napot case. This commit fixes the bug.
|
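A small plain-Scala sketch (illustrative only) of the napot detail involved: when the GVPN is rebuilt from a stage-1 hit, a Svnapot entry (napot_bits = 4, i.e. a 64 KiB contiguous range) must take its low 4 PPN bits from the request's VPN rather than from the stored PPN. The field widths and helper name are assumptions.
```scala
// Illustrative reconstruction of the stage-1 PPN used to form the GVPN: for a
// napot entry the low napotBits come from the request VPN, not the stored PPN.
def stage1Ppn(entryPpn: BigInt, reqVpn: BigInt, isNapot: Boolean, napotBits: Int = 4): BigInt =
  if (isNapot) {
    val mask = (BigInt(1) << napotBits) - 1
    (entryPpn & ~mask) | (reqVpn & mask)
  } else entryPpn
``` |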
8c9da034 | 09-Apr-2025 |
Haoyuan Feng <[email protected]> |
fix(LLPTW): should not check g-stage pf when vs-stage pf occured (#4525)
When a vs-stage page fault occurs, checking the high bits of the gvpn for the g-stage is meaningless and should not report a guest page fault. |
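A one-line plain-Scala sketch (illustrative only) of the rule above: the g-stage check on the high gvpn bits is qualified by the absence of a vs-stage page fault, so a vs-stage fault never also reports a guest page fault from that check.
```scala
// Illustrative qualification: the gvpn high-bit check only produces a guest
// page fault when the vs-stage itself did not fault.
def guestPageFault(vsStagePf: Boolean, gvpnHighBitsSet: Boolean): Boolean =
  !vsStagePf && gvpnHighBitsSet
```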