/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
	L2CR functions
	Copyright © 1997-1998 by PowerLogix R & D, Inc.

*/
/*
	Thur, Dec. 12, 1998.
	- First public release, contributed by PowerLogix.
	***********
	Sat, Aug. 7, 1999.
	- Terry: Made sure code disabled interrupts before running. (Previously
		it was assumed interrupts were already disabled).
	- Terry: Updated for tentative G4 support. 4MB of memory is now flushed
		instead of 2MB. (Prob. only 3 is necessary).
	- Terry: Updated for workaround to HID0[DPM] processor bug
		during global invalidates.
	***********
	Thu, July 13, 2000.
	- Terry: Added isync to correct for an errata.

	22 August 2001.
	- DanM: Finally added the 7450 patch I've had for the past
		several months. The L2CR is similar, but I'm going
		to assume the user of these functions knows what they
		are doing.

	Author: Terry Greeniaus (tgree@phys.ualberta.ca)
	Please e-mail updates to this file to me, thanks!
*/
#include <asm/processor.h>
#include <asm/cputable.h>
#include <asm/ppc_asm.h>
#include <asm/cache.h>
#include <asm/page.h>
#include <asm/feature-fixups.h>

/* Usage:

	When setting the L2CR register, you must do a few special
	things. If you are enabling the cache, you must perform a
	global invalidate. If you are disabling the cache, you must
	flush the cache contents first. This routine takes care of
	doing these things. When first enabling the cache, make sure
	you pass in the L2CR you want, as well as passing in the
	global invalidate bit set. A global invalidate will only be
	performed if the L2I bit is set in applyThis. When enabling
	the cache, you should also set the L2E bit in applyThis. If
	you want to modify the L2CR contents after the cache has been
	enabled, the recommended procedure is to first call
	_set_L2CR(0) to disable the cache and then call it again with
	the new values for L2CR. Examples:

	_set_L2CR(0) - disables the cache
	_set_L2CR(0xB3A04000) - enables my G3 upgrade card:
		- L2E set to turn on the cache
		- L2SIZ set to 1MB
		- L2CLK set to 1:1
		- L2RAM set to pipelined synchronous late-write
		- L2I set to perform a global invalidation
		- L2OH set to 0.5 ns
		- L2DF set because this upgrade card
		  requires it

	A similar call should work for your card. You need to know
	the correct settings for your card and then place them in the
	fields I have outlined above. Other fields support optional
	features, such as L2DO which caches only data, or L2TS which
	causes cache pushes from the L1 cache to go to the L2 cache
	instead of to main memory.

	IMPORTANT:
	Starting with the 7450, the bits in this register have moved
	or behave differently. The Enable, Parity Enable, Size,
	and L2 Invalidate are the only bits that have not moved.
	The size is read-only for these processors with internal L2
	cache, and the invalidate is a control as well as status.
	-- Dan

*/
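
/*
 * Illustration only (nothing in this comment is executed here): from C,
 * with a prototype along the lines of "extern void _set_L2CR(unsigned long);"
 * (an assumption - check the declaration actually used by callers), the
 * sequence described above would look like:
 *
 *	_set_L2CR(0);			// flush and disable the L2
 *	_set_L2CR(0xB3A04000);		// re-enable with the settings listed above
 *
 * or, building the value from L2CR_* bit definitions (assumed to be the
 * ones provided by asm/reg.h):
 *
 *	unsigned long l2cr = L2CR_L2E | L2CR_L2SIZ_1MB | L2CR_L2I | L2CR_L2DF;
 *	_set_L2CR(l2cr);
 */
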
/*
 * Summary: this procedure ignores the L2I bit in the value passed in,
 * flushes the cache if it was already enabled, always invalidates the
 * cache, then enables the cache if the L2E bit is set in the value
 * passed in.
 * -- paulus.
 */
_GLOBAL(_set_L2CR)
	/* Make sure this is a 750 or 7400 chip */
BEGIN_FTR_SECTION
	li	r3,-1
	blr
END_FTR_SECTION_IFCLR(CPU_FTR_L2CR)

	mflr	r9

	/* Stop DST streams */
BEGIN_FTR_SECTION
	DSSALL
	sync
END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)

	/* Turn off interrupts and data relocation. */
	mfmsr	r7			/* Save MSR in r7 */
	rlwinm	r4,r7,0,17,15		/* Turn off EE bit */
	rlwinm	r4,r4,0,28,26		/* Turn off DR bit */
	sync
	mtmsr	r4
	isync
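	/* Illustration only: the two rlwinm's above just mask bits out of the
	 * saved MSR. In C, with the MSR_EE/MSR_DR masks (assumed to be the
	 * usual asm/reg.h definitions), the same computation is roughly:
	 *
	 *	new_msr = old_msr & ~(MSR_EE | MSR_DR);
	 */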

	/* Before we perform the global invalidation, we must disable dynamic
	 * power management via HID0[DPM] to work around a processor bug where
	 * DPM can possibly interfere with the state machine in the processor
	 * that invalidates the L2 cache tags.
	 */
	mfspr	r8,SPRN_HID0		/* Save HID0 in r8 */
	rlwinm	r4,r8,0,12,10		/* Turn off HID0[DPM] */
	sync
	mtspr	SPRN_HID0,r4		/* Disable DPM */
	sync

	/* Get the current enable bit of the L2CR into r4 */
	mfspr	r4,SPRN_L2CR

	/* Tweak some bits */
	rlwinm	r5,r3,0,0,0		/* r5 contains the new enable bit */
	rlwinm	r3,r3,0,11,9		/* Turn off the invalidate bit */
	rlwinm	r3,r3,0,1,31		/* Turn off the enable bit */
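	/* Illustration only: in C, with the L2CR_L2E/L2CR_L2I masks (assumed
	 * to match asm/reg.h), the three rlwinm's above amount to:
	 *
	 *	enable = new_l2cr & L2CR_L2E;		// kept in r5
	 *	new_l2cr &= ~(L2CR_L2I | L2CR_L2E);	// config bits only, in r3
	 */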

	/* Check to see if we need to flush */
	rlwinm.	r4,r4,0,0,0
	beq	2f

	/* Flush the cache. First, read the first 4MB of memory (physical) to
	 * put new data in the cache. (Actually we only need
	 * the size of the L2 cache plus the size of the L1 cache, but 4MB will
	 * cover everything just to be safe).
	 */

	/**** Might be a good idea to set L2DO here - to prevent instructions
	      from getting into the cache. But since we invalidate
	      the next time we enable the cache it doesn't really matter.
	      Don't do this unless you accommodate all processor variations.
	      The bit moved on the 7450.....
	 ****/

BEGIN_FTR_SECTION
	/* Disable L2 prefetch on some 745x and try to ensure
	 * L2 prefetch engines are idle. As explained by errata
	 * text, we can't be sure they are, we just hope very hard
	 * that will be enough. At least I noticed Apple
	 * doesn't even bother doing the dcbf's here...
	 */
	mfspr	r4,SPRN_MSSCR0
	rlwinm	r4,r4,0,0,29
	sync
	mtspr	SPRN_MSSCR0,r4
	sync
	isync
	lis	r4,KERNELBASE@h
	dcbf	0,r4
	dcbf	0,r4
	dcbf	0,r4
	dcbf	0,r4
END_FTR_SECTION_IFSET(CPU_FTR_SPEC7450)

	/* TODO: use HW flush assist when available */

	lis	r4,0x0002
	mtctr	r4
	li	r4,0
1:
	lwzx	r0,0,r4
	addi	r4,r4,32		/* Go to start of next cache line */
	bdnz	1b
	isync

	/* Now, flush the first 4MB of memory */
	lis	r4,0x0002
	mtctr	r4
	li	r4,0
	sync
1:
	dcbf	0,r4
	addi	r4,r4,32		/* Go to start of next cache line */
	bdnz	1b
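	/* Illustration only: the two loops above implement the classic
	 * software flush - read enough memory to displace every cached line,
	 * then dcbf each line back out. A rough C sketch (32-byte lines, 4MB,
	 * data translation off as above) would be:
	 *
	 *	for (p = 0; p < 4 * 1024 * 1024; p += 32)
	 *		(void)*(volatile unsigned long *)p;	// load to displace
	 *	for (p = 0; p < 4 * 1024 * 1024; p += 32)
	 *		asm volatile("dcbf 0,%0" : : "r" (p) : "memory");
	 */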

2:
	/* Set up the L2CR configuration bits (and switch L2 off) */
	/* CPU errata: Make sure the mtspr below is already in the
	 * L1 icache
	 */
	b	20f
	.balign	L1_CACHE_BYTES
22:
	sync
	mtspr	SPRN_L2CR,r3
	sync
	b	23f
20:
	b	21f
21:	sync
	isync
	b	22b

23:
	/* Perform a global invalidation */
	oris	r3,r3,0x0020
	sync
	mtspr	SPRN_L2CR,r3
	sync
	isync				/* For errata */

BEGIN_FTR_SECTION
	/* On the 7450, we wait for the L2I bit to clear...
	 */
10:	mfspr	r3,SPRN_L2CR
	andis.	r4,r3,0x0020
	bne	10b
	b	11f
END_FTR_SECTION_IFSET(CPU_FTR_SPEC7450)

	/* Wait for the invalidation to complete */
3:	mfspr	r3,SPRN_L2CR
	rlwinm.	r4,r3,0,31,31
	bne	3b

11:	rlwinm	r3,r3,0,11,9		/* Turn off the L2I bit */
	sync
	mtspr	SPRN_L2CR,r3
	sync

	/* See if we need to enable the cache */
	cmplwi	r5,0
	beq	4f

	/* Enable the cache */
	oris	r3,r3,0x8000
	mtspr	SPRN_L2CR,r3
	sync

	/* Enable L2 HW prefetch on 744x/745x */
BEGIN_FTR_SECTION
	mfspr	r3,SPRN_MSSCR0
	ori	r3,r3,3
	sync
	mtspr	SPRN_MSSCR0,r3
	sync
	isync
END_FTR_SECTION_IFSET(CPU_FTR_SPEC7450)
4:

	/* Restore HID0[DPM] to whatever it was before */
	sync
	mtspr	SPRN_HID0,r8
	sync

	/* Restore MSR (restores EE and DR bits to original state) */
	mtmsr	r7
	isync

	mtlr	r9
	blr

_GLOBAL(_get_L2CR)
	/* Return the L2CR contents */
	li	r3,0
BEGIN_FTR_SECTION
	mfspr	r3,SPRN_L2CR
END_FTR_SECTION_IFSET(CPU_FTR_L2CR)
	blr


/*
 * Here is a similar routine for dealing with the L3 cache
 * on the 745x family of chips
 */
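/*
 * Illustration only: callers use _set_L3CR()/_get_L3CR() just like the L2
 * routines above, e.g. (prototypes assumed, not defined in this file):
 *
 *	unsigned long l3cr = _get_L3CR();
 *	_set_L3CR(0);				// flush and disable the L3
 *	_set_L3CR(l3cr | L3CR_L3E);		// re-enable with new settings
 */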

_GLOBAL(_set_L3CR)
	/* Make sure this is a 745x chip */
BEGIN_FTR_SECTION
	li	r3,-1
	blr
END_FTR_SECTION_IFCLR(CPU_FTR_L3CR)

	/* Turn off interrupts and data relocation. */
	mfmsr	r7			/* Save MSR in r7 */
	rlwinm	r4,r7,0,17,15		/* Turn off EE bit */
	rlwinm	r4,r4,0,28,26		/* Turn off DR bit */
	sync
	mtmsr	r4
	isync

	/* Stop DST streams */
	DSSALL
	sync

	/* Get the current enable bit of the L3CR into r4 */
	mfspr	r4,SPRN_L3CR

	/* Tweak some bits */
	rlwinm	r5,r3,0,0,0		/* r5 contains the new enable bit */
	rlwinm	r3,r3,0,22,20		/* Turn off the invalidate bit */
	rlwinm	r3,r3,0,2,31		/* Turn off the enable & PE bits */
	rlwinm	r3,r3,0,5,3		/* Turn off the clken bit */
	/* Check to see if we need to flush */
	rlwinm.	r4,r4,0,0,0
	beq	2f

	/* Flush the cache. */

	/* TODO: use HW flush assist */

	lis	r4,0x0008
	mtctr	r4
	li	r4,0
1:
	lwzx	r0,0,r4
	dcbf	0,r4
	addi	r4,r4,32		/* Go to start of next cache line */
	bdnz	1b

2:
	/* Set up the L3CR configuration bits (and switch L3 off) */
	sync
	mtspr	SPRN_L3CR,r3
	sync

	oris	r3,r3,L3CR_L3RES@h	/* Set reserved bit 5 */
	mtspr	SPRN_L3CR,r3
	sync
	oris	r3,r3,L3CR_L3CLKEN@h	/* Set clken */
	mtspr	SPRN_L3CR,r3
	sync

	/* Wait for stabilize */
	li	r0,256
	mtctr	r0
1:	bdnz	1b

	/* Perform a global invalidation */
	ori	r3,r3,0x0400
	sync
	mtspr	SPRN_L3CR,r3
	sync
	isync

	/* We wait for the L3I bit to clear... */
10:	mfspr	r3,SPRN_L3CR
	andi.	r4,r3,0x0400
	bne	10b

	/* Clear CLKEN */
	rlwinm	r3,r3,0,5,3		/* Turn off the clken bit */
	mtspr	SPRN_L3CR,r3
	sync

	/* Wait for stabilize */
	li	r0,256
	mtctr	r0
1:	bdnz	1b

	/* See if we need to enable the cache */
	cmplwi	r5,0
	beq	4f

	/* Enable the cache */
	oris	r3,r3,(L3CR_L3E | L3CR_L3CLKEN)@h
	mtspr	SPRN_L3CR,r3
	sync

	/* Wait for stabilize */
	li	r0,256
	mtctr	r0
1:	bdnz	1b

	/* Restore MSR (restores EE and DR bits to original state) */
4:
	mtmsr	r7
	isync
	blr

_GLOBAL(_get_L3CR)
	/* Return the L3CR contents */
	li	r3,0
BEGIN_FTR_SECTION
	mfspr	r3,SPRN_L3CR
END_FTR_SECTION_IFSET(CPU_FTR_L3CR)
	blr

/* --- End of PowerLogix code --- */


/* __flush_disable_L1() - Flush and disable L1 cache
 *
 * clobbers r0, r3, ctr, cr0
 * Must be called with interrupts disabled and MMU enabled.
 */
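/* Illustration only: __flush_disable_L1 and __inval_enable_L1 below are
 * meant to be used as a pair around a low-power step, from a context that
 * already has interrupts off (the caller sketched here is an assumption):
 *
 *	__flush_disable_L1();
 *	... enter and leave the low-power state ...
 *	__inval_enable_L1();
 */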
_GLOBAL(__flush_disable_L1)
	/* Stop pending altivec streams and memory accesses */
BEGIN_FTR_SECTION
	DSSALL
END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
	sync

	/* Load counter to 0x4000 cache lines (512kB) and
	 * fill the cache with data
	 */
	li	r3,0x4000		/* 512kB / 32B */
	mtctr	r3
	lis	r3,KERNELBASE@h
1:
	lwz	r0,0(r3)
	addi	r3,r3,0x0020		/* Go to start of next cache line */
	bdnz	1b
	isync
	sync

	/* Now flush those cache lines */
	li	r3,0x4000		/* 512kB / 32B */
	mtctr	r3
	lis	r3,KERNELBASE@h
1:
	dcbf	0,r3
	addi	r3,r3,0x0020		/* Go to start of next cache line */
	bdnz	1b
	sync

	/* We can now disable the L1 cache (HID0:DCE, HID0:ICE) */
	mfspr	r3,SPRN_HID0
	rlwinm	r3,r3,0,18,15
	mtspr	SPRN_HID0,r3
	sync
	isync
	blr

/* __inval_enable_L1 - Invalidate and enable L1 cache
 *
 * Assumes L1 is already disabled and MSR:EE is off
 *
 * clobbers r3
 */
_GLOBAL(__inval_enable_L1)
	/* Enable and then flash-invalidate the instruction & data caches */
	mfspr	r3,SPRN_HID0
	ori	r3,r3,HID0_ICE|HID0_ICFI|HID0_DCE|HID0_DCI
	sync
	isync
	mtspr	SPRN_HID0,r3
	xori	r3,r3,HID0_ICFI|HID0_DCI
	mtspr	SPRN_HID0,r3
	sync
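	/* Illustration only: in C terms, the flash-invalidate sequence above
	 * (using the same HID0_* masks already used in this file) is simply:
	 *
	 *	hid0 |= HID0_ICE | HID0_ICFI | HID0_DCE | HID0_DCI;	// enable + flash invalidate
	 *	mtspr(SPRN_HID0, hid0);
	 *	hid0 &= ~(HID0_ICFI | HID0_DCI);			// drop the invalidate bits again
	 *	mtspr(SPRN_HID0, hid0);
	 */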

	blr
_ASM_NOKPROBE_SYMBOL(__inval_enable_L1)