Mon Nov 30 01:58:49 2009 UTC ()
stabilize UP USIII support by blocking interrupts around sp_tlb_flush_pte();
i was seeing stack corruption while taking an interrupt in this function.

get USIII SMP mostly working by implementing the cheetah version of
sparc64_ipi_flush_pte().

SMP support is still not entirely stable.  i can reproducibly get a:

	panic: fpusave_lwp ipi didn't

while running build.sh, when an awk process is exiting.  other simple
heavy workloads do not crash for me right now.


(mrg)
diff -r1.297 -r1.298 src/sys/arch/sparc64/sparc64/locore.s


--- src/sys/arch/sparc64/sparc64/locore.s 2009/11/30 01:45:04 1.297
+++ src/sys/arch/sparc64/sparc64/locore.s 2009/11/30 01:58:49 1.298
@@ -1,1000 +1,1000 @@ @@ -1,1000 +1,1000 @@
1/* $NetBSD: locore.s,v 1.297 2009/11/30 01:45:04 mrg Exp $ */ 1/* $NetBSD: locore.s,v 1.298 2009/11/30 01:58:49 mrg Exp $ */
2 2
3/* 3/*
4 * Copyright (c) 1996-2002 Eduardo Horvath 4 * Copyright (c) 1996-2002 Eduardo Horvath
5 * Copyright (c) 1996 Paul Kranenburg 5 * Copyright (c) 1996 Paul Kranenburg
6 * Copyright (c) 1996 6 * Copyright (c) 1996
7 * The President and Fellows of Harvard College. 7 * The President and Fellows of Harvard College.
8 * All rights reserved. 8 * All rights reserved.
9 * Copyright (c) 1992, 1993 9 * Copyright (c) 1992, 1993
10 * The Regents of the University of California. 10 * The Regents of the University of California.
11 * All rights reserved. 11 * All rights reserved.
12 * 12 *
13 * This software was developed by the Computer Systems Engineering group 13 * This software was developed by the Computer Systems Engineering group
14 * at Lawrence Berkeley Laboratory under DARPA contract BG 91-66 and 14 * at Lawrence Berkeley Laboratory under DARPA contract BG 91-66 and
15 * contributed to Berkeley. 15 * contributed to Berkeley.
16 * 16 *
17 * All advertising materials mentioning features or use of this software 17 * All advertising materials mentioning features or use of this software
18 * must display the following acknowledgement: 18 * must display the following acknowledgement:
19 * This product includes software developed by the University of 19 * This product includes software developed by the University of
20 * California, Lawrence Berkeley Laboratory. 20 * California, Lawrence Berkeley Laboratory.
21 * This product includes software developed by Harvard University. 21 * This product includes software developed by Harvard University.
22 * 22 *
23 * Redistribution and use in source and binary forms, with or without 23 * Redistribution and use in source and binary forms, with or without
24 * modification, are permitted provided that the following conditions 24 * modification, are permitted provided that the following conditions
25 * are met: 25 * are met:
26 * 1. Redistributions of source code must retain the above copyright 26 * 1. Redistributions of source code must retain the above copyright
27 * notice, this list of conditions and the following disclaimer. 27 * notice, this list of conditions and the following disclaimer.
28 * 2. Redistributions in binary form must reproduce the above copyright 28 * 2. Redistributions in binary form must reproduce the above copyright
29 * notice, this list of conditions and the following disclaimer in the 29 * notice, this list of conditions and the following disclaimer in the
30 * documentation and/or other materials provided with the 30 * documentation and/or other materials provided with the
31 * distribution. 31 * distribution.
32 * 3. All advertising materials mentioning features or use of this 32 * 3. All advertising materials mentioning features or use of this
33 * software must display the following acknowledgement: 33 * software must display the following acknowledgement:
34 * This product includes software developed by the University of 34 * This product includes software developed by the University of
35 * California, Berkeley and its contributors. 35 * California, Berkeley and its contributors.
36 * This product includes software developed by Harvard University. 36 * This product includes software developed by Harvard University.
37 * This product includes software developed by Paul Kranenburg. 37 * This product includes software developed by Paul Kranenburg.
38 * 4. Neither the name of the University nor the names of its 38 * 4. Neither the name of the University nor the names of its
39 * contributors may be used to endorse or promote products derived 39 * contributors may be used to endorse or promote products derived
40 * from this software without specific prior written permission. 40 * from this software without specific prior written permission.
41 * 41 *
42 * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' 42 * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS''
43 * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, 43 * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
44 * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A 44 * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
45 * PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR 45 * PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR
46 * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, 46 * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
47 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT 47 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
48 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, 48 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
49 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON 49 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
50 * ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR 50 * ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
51 * TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF 51 * TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
52 * THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH 52 * THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
53 * DAMAGE. 53 * DAMAGE.
54 * 54 *
55 * @(#)locore.s 8.4 (Berkeley) 12/10/93 55 * @(#)locore.s 8.4 (Berkeley) 12/10/93
56 */ 56 */
57 57
58#ifndef SCHIZO_BUS_SPACE_BROKEN /* Need phys access for USIII so far */ 58#ifndef SCHIZO_BUS_SPACE_BROKEN /* Need phys access for USIII so far */
59#define SPITFIRE /* We don't support Cheetah (USIII) yet */ 59#define SPITFIRE /* We don't support Cheetah (USIII) yet */
60#endif 60#endif
61#undef PARANOID /* Extremely expensive consistency checks */ 61#undef PARANOID /* Extremely expensive consistency checks */
62#undef NO_VCACHE /* Map w/D$ disabled */ 62#undef NO_VCACHE /* Map w/D$ disabled */
63#undef TRAPSTATS /* Count traps */ 63#undef TRAPSTATS /* Count traps */
64#undef TRAPS_USE_IG /* Use Interrupt Globals for all traps */ 64#undef TRAPS_USE_IG /* Use Interrupt Globals for all traps */
65#define HWREF /* Track ref/mod bits in trap handlers */ 65#define HWREF /* Track ref/mod bits in trap handlers */
66#undef DCACHE_BUG /* Flush D$ around ASI_PHYS accesses */ 66#undef DCACHE_BUG /* Flush D$ around ASI_PHYS accesses */
67#undef NO_TSB /* Don't use TSB */ 67#undef NO_TSB /* Don't use TSB */
68#define USE_BLOCK_STORE_LOAD /* enable block load/store ops */ 68#define USE_BLOCK_STORE_LOAD /* enable block load/store ops */
69#define BB_ERRATA_1 /* writes to TICK_CMPR may fail */ 69#define BB_ERRATA_1 /* writes to TICK_CMPR may fail */
70 70
71#include "opt_ddb.h" 71#include "opt_ddb.h"
72#include "opt_kgdb.h" 72#include "opt_kgdb.h"
73#include "opt_multiprocessor.h" 73#include "opt_multiprocessor.h"
74#include "opt_compat_netbsd.h" 74#include "opt_compat_netbsd.h"
75#include "opt_compat_netbsd32.h" 75#include "opt_compat_netbsd32.h"
76#include "opt_lockdebug.h" 76#include "opt_lockdebug.h"
77 77
78#include "assym.h" 78#include "assym.h"
79#include <machine/param.h> 79#include <machine/param.h>
80#include <sparc64/sparc64/intreg.h> 80#include <sparc64/sparc64/intreg.h>
81#include <sparc64/sparc64/timerreg.h> 81#include <sparc64/sparc64/timerreg.h>
82#include <machine/ctlreg.h> 82#include <machine/ctlreg.h>
83#include <machine/psl.h> 83#include <machine/psl.h>
84#include <machine/signal.h> 84#include <machine/signal.h>
85#include <machine/trap.h> 85#include <machine/trap.h>
86#include <machine/frame.h> 86#include <machine/frame.h>
87#include <machine/pte.h> 87#include <machine/pte.h>
88#include <machine/pmap.h> 88#include <machine/pmap.h>
89#include <machine/intr.h> 89#include <machine/intr.h>
90#include <machine/asm.h> 90#include <machine/asm.h>
91#include <sys/syscall.h> 91#include <sys/syscall.h>
92 92
93#include "ksyms.h" 93#include "ksyms.h"
94 94
95/* A few convenient abbreviations for trapframe fields. */ 95/* A few convenient abbreviations for trapframe fields. */
96#define TF_G TF_GLOBAL 96#define TF_G TF_GLOBAL
97#define TF_O TF_OUT 97#define TF_O TF_OUT
98#define TF_L TF_LOCAL 98#define TF_L TF_LOCAL
99#define TF_I TF_IN 99#define TF_I TF_IN
100 100
101#undef CURLWP 101#undef CURLWP
102#undef CPCB 102#undef CPCB
103#undef FPLWP 103#undef FPLWP
104 104
105#define CURLWP (CPUINFO_VA + CI_CURLWP) 105#define CURLWP (CPUINFO_VA + CI_CURLWP)
106#define CPCB (CPUINFO_VA + CI_CPCB) 106#define CPCB (CPUINFO_VA + CI_CPCB)
107#define FPLWP (CPUINFO_VA + CI_FPLWP) 107#define FPLWP (CPUINFO_VA + CI_FPLWP)
108 108
109/* Let us use same syntax as C code */ 109/* Let us use same syntax as C code */
110#define Debugger() ta 1; nop 110#define Debugger() ta 1; nop
111 111
112#if 1 112#if 1
113/* 113/*
114 * Try to issue an elf note to ask the Solaris 114 * Try to issue an elf note to ask the Solaris
115 * bootloader to align the kernel properly. 115 * bootloader to align the kernel properly.
116 */ 116 */
117 .section .note 117 .section .note
118 .word 0x0d 118 .word 0x0d
119 .word 4 ! Dunno why 119 .word 4 ! Dunno why
120 .word 1 120 .word 1
1210: .asciz "SUNW Solaris" 1210: .asciz "SUNW Solaris"
1221: 1221:
123 .align 4 123 .align 4
124 .word 0x0400000 124 .word 0x0400000
125#endif 125#endif
126 126
127 .register %g2,#scratch 127 .register %g2,#scratch
128 .register %g3,#scratch 128 .register %g3,#scratch
129 129
130/* 130/*
131 * Here are some defines to try to maintain consistency but still 131 * Here are some defines to try to maintain consistency but still
132 * support 32-and 64-bit compilers. 132 * support 32-and 64-bit compilers.
133 */ 133 */
134#ifdef _LP64 134#ifdef _LP64
135/* reg that points to base of data/text segment */ 135/* reg that points to base of data/text segment */
136#define BASEREG %g4 136#define BASEREG %g4
137/* first constants for storage allocation */ 137/* first constants for storage allocation */
138#define LNGSZ 8 138#define LNGSZ 8
139#define LNGSHFT 3 139#define LNGSHFT 3
140#define PTRSZ 8 140#define PTRSZ 8
141#define PTRSHFT 3 141#define PTRSHFT 3
142#define POINTER .xword 142#define POINTER .xword
143#define ULONG .xword 143#define ULONG .xword
144/* Now instructions to load/store pointers & long ints */ 144/* Now instructions to load/store pointers & long ints */
145#define LDLNG ldx 145#define LDLNG ldx
146#define LDULNG ldx 146#define LDULNG ldx
147#define STLNG stx 147#define STLNG stx
148#define STULNG stx 148#define STULNG stx
149#define LDPTR ldx 149#define LDPTR ldx
150#define LDPTRA ldxa 150#define LDPTRA ldxa
151#define STPTR stx 151#define STPTR stx
152#define STPTRA stxa 152#define STPTRA stxa
153#define CASPTR casxa 153#define CASPTR casxa
154/* Now something to calculate the stack bias */ 154/* Now something to calculate the stack bias */
155#define STKB BIAS 155#define STKB BIAS
156#define CCCR %xcc 156#define CCCR %xcc
157#else 157#else
158#define BASEREG %g0 158#define BASEREG %g0
159#define LNGSZ 4 159#define LNGSZ 4
160#define LNGSHFT 2 160#define LNGSHFT 2
161#define PTRSZ 4 161#define PTRSZ 4
162#define PTRSHFT 2 162#define PTRSHFT 2
163#define POINTER .word 163#define POINTER .word
164#define ULONG .word 164#define ULONG .word
165/* Instructions to load/store pointers & long ints */ 165/* Instructions to load/store pointers & long ints */
166#define LDLNG ldsw 166#define LDLNG ldsw
167#define LDULNG lduw 167#define LDULNG lduw
168#define STLNG stw 168#define STLNG stw
169#define STULNG stw 169#define STULNG stw
170#define LDPTR lduw 170#define LDPTR lduw
171#define LDPTRA lduwa 171#define LDPTRA lduwa
172#define STPTR stw 172#define STPTR stw
173#define STPTRA stwa 173#define STPTRA stwa
174#define CASPTR casa 174#define CASPTR casa
175#define STKB 0 175#define STKB 0
176#define CCCR %icc 176#define CCCR %icc
177#endif 177#endif
178 178
179/* 179/*
180 * GNU assembler does not understand `.empty' directive; Sun assembler 180 * GNU assembler does not understand `.empty' directive; Sun assembler
181 * gripes about labels without it. To allow cross-compilation using 181 * gripes about labels without it. To allow cross-compilation using
182 * the Sun assembler, and because .empty directives are useful 182 * the Sun assembler, and because .empty directives are useful
183 * documentation, we use this trick. 183 * documentation, we use this trick.
184 */ 184 */
185#ifdef SUN_AS 185#ifdef SUN_AS
186#define EMPTY .empty 186#define EMPTY .empty
187#else 187#else
188#define EMPTY /* .empty */ 188#define EMPTY /* .empty */
189#endif 189#endif
190 190
191/* use as needed to align things on longword boundaries */ 191/* use as needed to align things on longword boundaries */
192#define _ALIGN .align 8 192#define _ALIGN .align 8
193#define ICACHE_ALIGN .align 32 193#define ICACHE_ALIGN .align 32
194 194
195/* Give this real authority: reset the machine */ 195/* Give this real authority: reset the machine */
196#define NOTREACHED sir 196#define NOTREACHED sir
197 197
198/* 198/*
199 * This macro will clear out a cache line before an explicit 199 * This macro will clear out a cache line before an explicit
200 * access to that location. It's mostly used to make certain 200 * access to that location. It's mostly used to make certain
201 * loads bypassing the D$ do not get stale D$ data. 201 * loads bypassing the D$ do not get stale D$ data.
202 * 202 *
203 * It uses a register with the address to clear and a temporary 203 * It uses a register with the address to clear and a temporary
204 * which is destroyed. 204 * which is destroyed.
205 */ 205 */
206#ifdef DCACHE_BUG 206#ifdef DCACHE_BUG
207#define DLFLUSH(a,t) \ 207#define DLFLUSH(a,t) \
208 andn a, 0x1f, t; \ 208 andn a, 0x1f, t; \
209 stxa %g0, [ t ] ASI_DCACHE_TAG; \ 209 stxa %g0, [ t ] ASI_DCACHE_TAG; \
210 membar #Sync 210 membar #Sync
211/* The following can be used if the pointer is 16-byte aligned */ 211/* The following can be used if the pointer is 16-byte aligned */
212#define DLFLUSH2(t) \ 212#define DLFLUSH2(t) \
213 stxa %g0, [ t ] ASI_DCACHE_TAG; \ 213 stxa %g0, [ t ] ASI_DCACHE_TAG; \
214 membar #Sync 214 membar #Sync
215#else 215#else
216#define DLFLUSH(a,t) 216#define DLFLUSH(a,t)
217#define DLFLUSH2(t) 217#define DLFLUSH2(t)
218#endif 218#endif
219 219
220 220
221/* 221/*
222 * Combine 2 regs -- used to convert 64-bit ILP32 222 * Combine 2 regs -- used to convert 64-bit ILP32
223 * values to LP64. 223 * values to LP64.
224 */ 224 */
225#define COMBINE(r1, r2, d) \ 225#define COMBINE(r1, r2, d) \
226 sllx r1, 32, d; \ 226 sllx r1, 32, d; \
227 or d, r2, d 227 or d, r2, d
228 228
229/* 229/*
230 * Split 64-bit value in 1 reg into high and low halves. 230 * Split 64-bit value in 1 reg into high and low halves.
231 * Used for ILP32 return values. 231 * Used for ILP32 return values.
232 */ 232 */
233#define SPLIT(r0, r1) \ 233#define SPLIT(r0, r1) \
234 srl r0, 0, r1; \ 234 srl r0, 0, r1; \
235 srlx r0, 32, r0 235 srlx r0, 32, r0
236 236
237 237
238/* 238/*
239 * A handy macro for maintaining instrumentation counters. 239 * A handy macro for maintaining instrumentation counters.
240 * Note that this clobbers %o0, %o1 and %o2. Normal usage is 240 * Note that this clobbers %o0, %o1 and %o2. Normal usage is
241 * something like: 241 * something like:
242 * foointr: 242 * foointr:
243 * TRAP_SETUP(...) ! makes %o registers safe 243 * TRAP_SETUP(...) ! makes %o registers safe
244 * INCR(_C_LABEL(cnt)+V_FOO) ! count a foo 244 * INCR(_C_LABEL(cnt)+V_FOO) ! count a foo
245 */ 245 */
246#define INCR(what) \ 246#define INCR(what) \
247 sethi %hi(what), %o0; \ 247 sethi %hi(what), %o0; \
248 or %o0, %lo(what), %o0; \ 248 or %o0, %lo(what), %o0; \
24999: \ 24999: \
250 lduw [%o0], %o1; \ 250 lduw [%o0], %o1; \
251 add %o1, 1, %o2; \ 251 add %o1, 1, %o2; \
252 casa [%o0] ASI_P, %o1, %o2; \ 252 casa [%o0] ASI_P, %o1, %o2; \
253 cmp %o1, %o2; \ 253 cmp %o1, %o2; \
254 bne,pn %icc, 99b; \ 254 bne,pn %icc, 99b; \
255 nop 255 nop
256 256
257/* 257/*
258 * A couple of handy macros to save and restore globals to/from 258 * A couple of handy macros to save and restore globals to/from
259 * locals. Since udivrem uses several globals, and it's called 259 * locals. Since udivrem uses several globals, and it's called
260 * from vsprintf, we need to do this before and after doing a printf. 260 * from vsprintf, we need to do this before and after doing a printf.
261 */ 261 */
262#define GLOBTOLOC \ 262#define GLOBTOLOC \
263 mov %g1, %l1; \ 263 mov %g1, %l1; \
264 mov %g2, %l2; \ 264 mov %g2, %l2; \
265 mov %g3, %l3; \ 265 mov %g3, %l3; \
266 mov %g4, %l4; \ 266 mov %g4, %l4; \
267 mov %g5, %l5; \ 267 mov %g5, %l5; \
268 mov %g6, %l6; \ 268 mov %g6, %l6; \
269 mov %g7, %l7 269 mov %g7, %l7
270 270
271#define LOCTOGLOB \ 271#define LOCTOGLOB \
272 mov %l1, %g1; \ 272 mov %l1, %g1; \
273 mov %l2, %g2; \ 273 mov %l2, %g2; \
274 mov %l3, %g3; \ 274 mov %l3, %g3; \
275 mov %l4, %g4; \ 275 mov %l4, %g4; \
276 mov %l5, %g5; \ 276 mov %l5, %g5; \
277 mov %l6, %g6; \ 277 mov %l6, %g6; \
278 mov %l7, %g7 278 mov %l7, %g7
279 279
280/* Load strings address into register; NOTE: hidden local label 99 */ 280/* Load strings address into register; NOTE: hidden local label 99 */
281#define LOAD_ASCIZ(reg, s) \ 281#define LOAD_ASCIZ(reg, s) \
282 set 99f, reg ; \ 282 set 99f, reg ; \
283 .data ; \ 283 .data ; \
28499: .asciz s ; \ 28499: .asciz s ; \
285 _ALIGN ; \ 285 _ALIGN ; \
286 .text 286 .text
287 287
288/* 288/*
289 * Handy stack conversion macros. 289 * Handy stack conversion macros.
290 * They correctly switch to requested stack type 290 * They correctly switch to requested stack type
291 * regardless of the current stack. 291 * regardless of the current stack.
292 */ 292 */
293 293
294#define TO_STACK64(size) \ 294#define TO_STACK64(size) \
295 save %sp, size, %sp; \ 295 save %sp, size, %sp; \
296 add %sp, -BIAS, %o0; /* Convert to 64-bits */ \ 296 add %sp, -BIAS, %o0; /* Convert to 64-bits */ \
297 andcc %sp, 1, %g0; /* 64-bit stack? */ \ 297 andcc %sp, 1, %g0; /* 64-bit stack? */ \
298 movz %icc, %o0, %sp 298 movz %icc, %o0, %sp
299 299
300#define TO_STACK32(size) \ 300#define TO_STACK32(size) \
301 save %sp, size, %sp; \ 301 save %sp, size, %sp; \
302 add %sp, +BIAS, %o0; /* Convert to 32-bits */ \ 302 add %sp, +BIAS, %o0; /* Convert to 32-bits */ \
303 andcc %sp, 1, %g0; /* 64-bit stack? */ \ 303 andcc %sp, 1, %g0; /* 64-bit stack? */ \
304 movnz %icc, %o0, %sp 304 movnz %icc, %o0, %sp
305 305
306#ifdef _LP64 306#ifdef _LP64
307#define STACKFRAME(size) TO_STACK64(size) 307#define STACKFRAME(size) TO_STACK64(size)
308#else 308#else
309#define STACKFRAME(size) TO_STACK32(size) 309#define STACKFRAME(size) TO_STACK32(size)
310#endif 310#endif
311 311
312#ifdef USE_BLOCK_STORE_LOAD 312#ifdef USE_BLOCK_STORE_LOAD
313/* 313/*
314 * The following routines allow fpu use in the kernel. 314 * The following routines allow fpu use in the kernel.
315 * 315 *
316 * They allocate a stack frame and use all local regs. Extra 316 * They allocate a stack frame and use all local regs. Extra
317 * local storage can be requested by setting the siz parameter, 317 * local storage can be requested by setting the siz parameter,
318 * and can be accessed at %sp+CC64FSZ. 318 * and can be accessed at %sp+CC64FSZ.
319 */ 319 */
320 320
321#define ENABLE_FPU(siz) \ 321#define ENABLE_FPU(siz) \
322 save %sp, -(CC64FSZ), %sp; /* Allocate a stack frame */ \ 322 save %sp, -(CC64FSZ), %sp; /* Allocate a stack frame */ \
323 sethi %hi(FPLWP), %l1; \ 323 sethi %hi(FPLWP), %l1; \
324 add %fp, STKB-FS_SIZE, %l0; /* Allocate a fpstate */ \ 324 add %fp, STKB-FS_SIZE, %l0; /* Allocate a fpstate */ \
325 LDPTR [%l1 + %lo(FPLWP)], %l2; /* Load fplwp */ \ 325 LDPTR [%l1 + %lo(FPLWP)], %l2; /* Load fplwp */ \
326 andn %l0, BLOCK_ALIGN, %l0; /* Align it */ \ 326 andn %l0, BLOCK_ALIGN, %l0; /* Align it */ \
327 clr %l3; /* NULL fpstate */ \ 327 clr %l3; /* NULL fpstate */ \
328 brz,pt %l2, 1f; /* fplwp == NULL? */ \ 328 brz,pt %l2, 1f; /* fplwp == NULL? */ \
329 add %l0, -STKB-CC64FSZ-(siz), %sp; /* Set proper %sp */ \ 329 add %l0, -STKB-CC64FSZ-(siz), %sp; /* Set proper %sp */ \
330 LDPTR [%l2 + L_FPSTATE], %l3; \ 330 LDPTR [%l2 + L_FPSTATE], %l3; \
331 brz,pn %l3, 1f; /* Make sure we have an fpstate */ \ 331 brz,pn %l3, 1f; /* Make sure we have an fpstate */ \
332 mov %l3, %o0; \ 332 mov %l3, %o0; \
333 call _C_LABEL(savefpstate); /* Save the old fpstate */ \ 333 call _C_LABEL(savefpstate); /* Save the old fpstate */ \
3341: \ 3341: \
335 set EINTSTACK-STKB, %l4; /* Are we on intr stack? */ \ 335 set EINTSTACK-STKB, %l4; /* Are we on intr stack? */ \
336 cmp %sp, %l4; \ 336 cmp %sp, %l4; \
337 bgu,pt %xcc, 1f; \ 337 bgu,pt %xcc, 1f; \
338 set INTSTACK-STKB, %l4; \ 338 set INTSTACK-STKB, %l4; \
339 cmp %sp, %l4; \ 339 cmp %sp, %l4; \
340 blu %xcc, 1f; \ 340 blu %xcc, 1f; \
3410: \ 3410: \
342 sethi %hi(_C_LABEL(lwp0)), %l4; /* Yes, use lpw0 */ \ 342 sethi %hi(_C_LABEL(lwp0)), %l4; /* Yes, use lpw0 */ \
343 ba,pt %xcc, 2f; /* XXXX needs to change to CPUs idle proc */ \ 343 ba,pt %xcc, 2f; /* XXXX needs to change to CPUs idle proc */ \
344 or %l4, %lo(_C_LABEL(lwp0)), %l5; \ 344 or %l4, %lo(_C_LABEL(lwp0)), %l5; \
3451: \ 3451: \
346 sethi %hi(CURLWP), %l4; /* Use curlwp */ \ 346 sethi %hi(CURLWP), %l4; /* Use curlwp */ \
347 LDPTR [%l4 + %lo(CURLWP)], %l5; \ 347 LDPTR [%l4 + %lo(CURLWP)], %l5; \
348 brz,pn %l5, 0b; nop; /* If curlwp is NULL need to use lwp0 */ \ 348 brz,pn %l5, 0b; nop; /* If curlwp is NULL need to use lwp0 */ \
3492: \ 3492: \
350 LDPTR [%l5 + L_FPSTATE], %l6; /* Save old fpstate */ \ 350 LDPTR [%l5 + L_FPSTATE], %l6; /* Save old fpstate */ \
351 STPTR %l0, [%l5 + L_FPSTATE]; /* Insert new fpstate */ \ 351 STPTR %l0, [%l5 + L_FPSTATE]; /* Insert new fpstate */ \
352 STPTR %l5, [%l1 + %lo(FPLWP)]; /* Set new fplwp */ \ 352 STPTR %l5, [%l1 + %lo(FPLWP)]; /* Set new fplwp */ \
353 wr %g0, FPRS_FEF, %fprs /* Enable FPU */ 353 wr %g0, FPRS_FEF, %fprs /* Enable FPU */
354 354
355/* 355/*
356 * Weve saved our possible fpstate, now disable the fpu 356 * Weve saved our possible fpstate, now disable the fpu
357 * and continue with life. 357 * and continue with life.
358 */ 358 */
359#ifdef DEBUG 359#ifdef DEBUG
360#define __CHECK_FPU \ 360#define __CHECK_FPU \
361 LDPTR [%l5 + L_FPSTATE], %l7; \ 361 LDPTR [%l5 + L_FPSTATE], %l7; \
362 cmp %l7, %l0; \ 362 cmp %l7, %l0; \
363 tnz 1; 363 tnz 1;
364#else 364#else
365#define __CHECK_FPU 365#define __CHECK_FPU
366#endif 366#endif
367  367
368#define RESTORE_FPU \ 368#define RESTORE_FPU \
369 __CHECK_FPU \ 369 __CHECK_FPU \
370 STPTR %l2, [%l1 + %lo(FPLWP)]; /* Restore old fproc */ \ 370 STPTR %l2, [%l1 + %lo(FPLWP)]; /* Restore old fproc */ \
371 wr %g0, 0, %fprs; /* Disable fpu */ \ 371 wr %g0, 0, %fprs; /* Disable fpu */ \
372 brz,pt %l3, 1f; /* Skip if no fpstate */ \ 372 brz,pt %l3, 1f; /* Skip if no fpstate */ \
373 STPTR %l6, [%l5 + L_FPSTATE]; /* Restore old fpstate */ \ 373 STPTR %l6, [%l5 + L_FPSTATE]; /* Restore old fpstate */ \
374 \ 374 \
375 mov %l3, %o0; \ 375 mov %l3, %o0; \
376 call _C_LABEL(loadfpstate); /* Re-load orig fpstate */ \ 376 call _C_LABEL(loadfpstate); /* Re-load orig fpstate */ \
3771: \ 3771: \
378 membar #Sync; /* Finish all FP ops */ 378 membar #Sync; /* Finish all FP ops */
379 379
380#endif /* USE_BLOCK_STORE_LOAD */ 380#endif /* USE_BLOCK_STORE_LOAD */
381  381
382 382
383 .data 383 .data
384 .globl _C_LABEL(data_start) 384 .globl _C_LABEL(data_start)
385_C_LABEL(data_start): ! Start of data segment 385_C_LABEL(data_start): ! Start of data segment
386#define DATA_START _C_LABEL(data_start) 386#define DATA_START _C_LABEL(data_start)
387 387
388#if 1 388#if 1
389/* XXX this shouldn't be needed... but kernel usually hangs without it */ 389/* XXX this shouldn't be needed... but kernel usually hangs without it */
390 .space USPACE 390 .space USPACE
391#endif 391#endif
392 392
393#ifdef KGDB 393#ifdef KGDB
394/* 394/*
395 * Another item that must be aligned, easiest to put it here. 395 * Another item that must be aligned, easiest to put it here.
396 */ 396 */
397KGDB_STACK_SIZE = 2048 397KGDB_STACK_SIZE = 2048
398 .globl _C_LABEL(kgdb_stack) 398 .globl _C_LABEL(kgdb_stack)
399_C_LABEL(kgdb_stack): 399_C_LABEL(kgdb_stack):
400 .space KGDB_STACK_SIZE ! hope this is enough 400 .space KGDB_STACK_SIZE ! hope this is enough
401#endif 401#endif
402 402
403#ifdef NOTDEF_DEBUG 403#ifdef NOTDEF_DEBUG
404/* 404/*
405 * This stack is used when we detect kernel stack corruption. 405 * This stack is used when we detect kernel stack corruption.
406 */ 406 */
407 .space USPACE 407 .space USPACE
408 .align 16 408 .align 16
409panicstack: 409panicstack:
410#endif 410#endif
411 411
412/* 412/*
413 * romp is the prom entry pointer 413 * romp is the prom entry pointer
414 * romtba is the prom trap table base address 414 * romtba is the prom trap table base address
415 */ 415 */
416 .globl romp 416 .globl romp
417romp: POINTER 0 417romp: POINTER 0
418 .globl romtba 418 .globl romtba
419romtba: POINTER 0 419romtba: POINTER 0
420 420
421 _ALIGN 421 _ALIGN
422 .text 422 .text
423 423
424/* 424/*
425 * The v9 trap frame is stored in the special trap registers. The 425 * The v9 trap frame is stored in the special trap registers. The
426 * register window is only modified on window overflow, underflow, 426 * register window is only modified on window overflow, underflow,
427 * and clean window traps, where it points to the register window 427 * and clean window traps, where it points to the register window
428 * needing service. Traps have space for 8 instructions, except for 428 * needing service. Traps have space for 8 instructions, except for
429 * the window overflow, underflow, and clean window traps which are 429 * the window overflow, underflow, and clean window traps which are
430 * 32 instructions long, large enough to in-line. 430 * 32 instructions long, large enough to in-line.
431 * 431 *
432 * The spitfire CPU (Ultra I) has 4 different sets of global registers. 432 * The spitfire CPU (Ultra I) has 4 different sets of global registers.
433 * (blah blah...) 433 * (blah blah...)
434 * 434 *
435 * I used to generate these numbers by address arithmetic, but gas's 435 * I used to generate these numbers by address arithmetic, but gas's
436 * expression evaluator has about as much sense as your average slug 436 * expression evaluator has about as much sense as your average slug
437 * (oddly enough, the code looks about as slimy too). Thus, all the 437 * (oddly enough, the code looks about as slimy too). Thus, all the
438 * trap numbers are given as arguments to the trap macros. This means 438 * trap numbers are given as arguments to the trap macros. This means
439 * there is one line per trap. Sigh. 439 * there is one line per trap. Sigh.
440 * 440 *
441 * Hardware interrupt vectors can be `linked'---the linkage is to regular 441 * Hardware interrupt vectors can be `linked'---the linkage is to regular
442 * C code---or rewired to fast in-window handlers. The latter are good 442 * C code---or rewired to fast in-window handlers. The latter are good
443 * for unbuffered hardware like the Zilog serial chip and the AMD audio 443 * for unbuffered hardware like the Zilog serial chip and the AMD audio
444 * chip, where many interrupts can be handled trivially with pseudo-DMA 444 * chip, where many interrupts can be handled trivially with pseudo-DMA
445 * or similar. Only one `fast' interrupt can be used per level, however, 445 * or similar. Only one `fast' interrupt can be used per level, however,
446 * and direct and `fast' interrupts are incompatible. Routines in intr.c 446 * and direct and `fast' interrupts are incompatible. Routines in intr.c
447 * handle setting these, with optional paranoia. 447 * handle setting these, with optional paranoia.
448 */ 448 */
449 449
450/* 450/*
451 * TA8 -- trap align for 8 instruction traps 451 * TA8 -- trap align for 8 instruction traps
452 * TA32 -- trap align for 32 instruction traps 452 * TA32 -- trap align for 32 instruction traps
453 */ 453 */
454#define TA8 .align 32 454#define TA8 .align 32
455#define TA32 .align 128 455#define TA32 .align 128
456 456
457/* 457/*
458 * v9 trap macros: 458 * v9 trap macros:
459 * 459 *
460 * We have a problem with v9 traps; we have no registers to put the 460 * We have a problem with v9 traps; we have no registers to put the
461 * trap type into. But we do have a %tt register which already has 461 * trap type into. But we do have a %tt register which already has
462 * that information. Trap types in these macros are all dummys. 462 * that information. Trap types in these macros are all dummys.
463 */ 463 */
464 /* regular vectored traps */ 464 /* regular vectored traps */
465 465
466#if KTR_COMPILE & KTR_TRAP 466#if KTR_COMPILE & KTR_TRAP
467#if 0 467#if 0
468#define TRACEWIN wrpr %g0, PSTATE_KERN|PSTATE_IG, %pstate;\ 468#define TRACEWIN wrpr %g0, PSTATE_KERN|PSTATE_IG, %pstate;\
	sethi %hi(9f), %g1; ba,pt %icc,ktr_trap_gen; or %g1, %lo(9f), %g1; 9:
#else
#define TRACEWIN
#endif
#define TRACEFLT	sethi %hi(1f), %g1; ba,pt %icc,ktr_trap_gen;\
	or %g1, %lo(1f), %g1; 1:
#define VTRAP(type, label) \
	sethi %hi(label), %g1; ba,pt %icc,ktr_trap_gen;\
	or %g1, %lo(label), %g1; NOTREACHED; TA8
#else
#define TRACEWIN
#define TRACEFLT
#define VTRAP(type, label) \
	ba,a,pt %icc,label; nop; NOTREACHED; TA8
#endif

	/* hardware interrupts (can be linked or made `fast') */
#define HARDINT4U(lev) \
	VTRAP(lev, _C_LABEL(sparc_interrupt))

	/* software interrupts (may not be made direct, sorry---but you
	   should not be using them trivially anyway) */
#define SOFTINT4U(lev, bit) \
	HARDINT4U(lev)

	/* traps that just call trap() */
#define TRAP(type)	VTRAP(type, slowtrap)

	/* architecturally undefined traps (cause panic) */
#ifndef DEBUG
#define UTRAP(type)	sir; VTRAP(type, slowtrap)
#else
#define UTRAP(type)	VTRAP(type, slowtrap)
#endif

	/* software undefined traps (may be replaced) */
#define STRAP(type)	VTRAP(type, slowtrap)

/* breakpoint acts differently under kgdb */
#ifdef KGDB
#define BPT		VTRAP(T_BREAKPOINT, bpt)
#define BPT_KGDB_EXEC	VTRAP(T_KGDB_EXEC, bpt)
#else
#define BPT		TRAP(T_BREAKPOINT)
#define BPT_KGDB_EXEC	TRAP(T_KGDB_EXEC)
#endif

#define SYSCALL		VTRAP(0x100, syscall_setup)
#ifdef notyet
#define ZS_INTERRUPT	ba,a,pt %icc, zshard; nop; TA8
#else
#define ZS_INTERRUPT4U	HARDINT4U(12)
#endif


/*
 * Macro to clear %tt so we don't get confused with old traps.
 */
#ifdef DEBUG
#define CLRTT	wrpr %g0,0x1ff,%tt
#else
#define CLRTT
#endif

/*
 * Here are some oft repeated traps as macros.
 */

	/* spill a 64-bit register window */
#define SPILL64(label,as) \
	TRACEWIN; \
label: \
	wr %g0, as, %asi; \
	stxa %l0, [%sp+BIAS+0x00]%asi; \
	stxa %l1, [%sp+BIAS+0x08]%asi; \
	stxa %l2, [%sp+BIAS+0x10]%asi; \
	stxa %l3, [%sp+BIAS+0x18]%asi; \
	stxa %l4, [%sp+BIAS+0x20]%asi; \
	stxa %l5, [%sp+BIAS+0x28]%asi; \
	stxa %l6, [%sp+BIAS+0x30]%asi; \
	\
	stxa %l7, [%sp+BIAS+0x38]%asi; \
	stxa %i0, [%sp+BIAS+0x40]%asi; \
	stxa %i1, [%sp+BIAS+0x48]%asi; \
	stxa %i2, [%sp+BIAS+0x50]%asi; \
	stxa %i3, [%sp+BIAS+0x58]%asi; \
	stxa %i4, [%sp+BIAS+0x60]%asi; \
	stxa %i5, [%sp+BIAS+0x68]%asi; \
	stxa %i6, [%sp+BIAS+0x70]%asi; \
	\
	stxa %i7, [%sp+BIAS+0x78]%asi; \
	saved; \
	CLRTT; \
	retry; \
	NOTREACHED; \
	TA32

	/* spill a 32-bit register window */
#define SPILL32(label,as) \
	TRACEWIN; \
label: \
	wr %g0, as, %asi; \
	srl %sp, 0, %sp; /* fixup 32-bit pointers */ \
	stwa %l0, [%sp+0x00]%asi; \
	stwa %l1, [%sp+0x04]%asi; \
	stwa %l2, [%sp+0x08]%asi; \
	stwa %l3, [%sp+0x0c]%asi; \
	stwa %l4, [%sp+0x10]%asi; \
	stwa %l5, [%sp+0x14]%asi; \
	\
	stwa %l6, [%sp+0x18]%asi; \
	stwa %l7, [%sp+0x1c]%asi; \
	stwa %i0, [%sp+0x20]%asi; \
	stwa %i1, [%sp+0x24]%asi; \
	stwa %i2, [%sp+0x28]%asi; \
	stwa %i3, [%sp+0x2c]%asi; \
	stwa %i4, [%sp+0x30]%asi; \
	stwa %i5, [%sp+0x34]%asi; \
	\
	stwa %i6, [%sp+0x38]%asi; \
	stwa %i7, [%sp+0x3c]%asi; \
	saved; \
	CLRTT; \
	retry; \
	NOTREACHED; \
	TA32

	/* Spill either 32-bit or 64-bit register window. */
#define SPILLBOTH(label64,label32,as) \
	TRACEWIN; \
	andcc %sp, 1, %g0; \
	bnz,pt %xcc, label64+4; /* Is it a v9 or v8 stack? */ \
	wr %g0, as, %asi; \
	ba,pt %xcc, label32+8; \
	srl %sp, 0, %sp; /* fixup 32-bit pointers */ \
	NOTREACHED; \
	TA32

	/* fill a 64-bit register window */
#define FILL64(label,as) \
	TRACEWIN; \
label: \
	wr %g0, as, %asi; \
	ldxa [%sp+BIAS+0x00]%asi, %l0; \
	ldxa [%sp+BIAS+0x08]%asi, %l1; \
	ldxa [%sp+BIAS+0x10]%asi, %l2; \
	ldxa [%sp+BIAS+0x18]%asi, %l3; \
	ldxa [%sp+BIAS+0x20]%asi, %l4; \
	ldxa [%sp+BIAS+0x28]%asi, %l5; \
	ldxa [%sp+BIAS+0x30]%asi, %l6; \
	\
	ldxa [%sp+BIAS+0x38]%asi, %l7; \
	ldxa [%sp+BIAS+0x40]%asi, %i0; \
	ldxa [%sp+BIAS+0x48]%asi, %i1; \
	ldxa [%sp+BIAS+0x50]%asi, %i2; \
	ldxa [%sp+BIAS+0x58]%asi, %i3; \
	ldxa [%sp+BIAS+0x60]%asi, %i4; \
	ldxa [%sp+BIAS+0x68]%asi, %i5; \
	ldxa [%sp+BIAS+0x70]%asi, %i6; \
	\
	ldxa [%sp+BIAS+0x78]%asi, %i7; \
	restored; \
	CLRTT; \
	retry; \
	NOTREACHED; \
	TA32

	/* fill a 32-bit register window */
#define FILL32(label,as) \
	TRACEWIN; \
label: \
	wr %g0, as, %asi; \
	srl %sp, 0, %sp; /* fixup 32-bit pointers */ \
	lda [%sp+0x00]%asi, %l0; \
	lda [%sp+0x04]%asi, %l1; \
	lda [%sp+0x08]%asi, %l2; \
	lda [%sp+0x0c]%asi, %l3; \
	lda [%sp+0x10]%asi, %l4; \
	lda [%sp+0x14]%asi, %l5; \
	\
	lda [%sp+0x18]%asi, %l6; \
	lda [%sp+0x1c]%asi, %l7; \
	lda [%sp+0x20]%asi, %i0; \
	lda [%sp+0x24]%asi, %i1; \
	lda [%sp+0x28]%asi, %i2; \
	lda [%sp+0x2c]%asi, %i3; \
	lda [%sp+0x30]%asi, %i4; \
	lda [%sp+0x34]%asi, %i5; \
	\
	lda [%sp+0x38]%asi, %i6; \
	lda [%sp+0x3c]%asi, %i7; \
	restored; \
	CLRTT; \
	retry; \
	NOTREACHED; \
	TA32

	/* fill either 32-bit or 64-bit register window. */
#define FILLBOTH(label64,label32,as) \
	TRACEWIN; \
	andcc %sp, 1, %i0; \
	bnz (label64)+4; /* See if it's a v9 stack or v8 */ \
	wr %g0, as, %asi; \
	ba (label32)+8; \
	srl %sp, 0, %sp; /* fixup 32-bit pointers */ \
	NOTREACHED; \
	TA32

	.globl start, _C_LABEL(kernel_text)
	_C_LABEL(kernel_text) = kernel_start		! for kvm_mkdb(8)
kernel_start:
	/* Traps from TL=0 -- traps from user mode */
#ifdef __STDC__
#define TABLE(name)	user_ ## name
#else
#define TABLE(name)	user_/**/name
#endif
	.globl _C_LABEL(trapbase)
_C_LABEL(trapbase):
	b dostart; nop; TA8	! 000 = reserved -- Use it to boot
	/* We should not get the next 5 traps */
	UTRAP(0x001)		! 001 = POR Reset -- ROM should get this
	UTRAP(0x002)		! 002 = WDR -- ROM should get this
	UTRAP(0x003)		! 003 = XIR -- ROM should get this
	UTRAP(0x004)		! 004 = SIR -- ROM should get this
	UTRAP(0x005)		! 005 = RED state exception
	UTRAP(0x006); UTRAP(0x007)
	VTRAP(T_INST_EXCEPT, textfault)	! 008 = instr. access exception
	VTRAP(T_TEXTFAULT, textfault)	! 009 = instr. access MMU miss
	VTRAP(T_INST_ERROR, textfault)	! 00a = instr. access err
	UTRAP(0x00b); UTRAP(0x00c); UTRAP(0x00d); UTRAP(0x00e); UTRAP(0x00f)
	TRAP(T_ILLINST)		! 010 = illegal instruction
	TRAP(T_PRIVINST)	! 011 = privileged instruction
	UTRAP(0x012)		! 012 = unimplemented LDD
	UTRAP(0x013)		! 013 = unimplemented STD
	UTRAP(0x014); UTRAP(0x015); UTRAP(0x016); UTRAP(0x017); UTRAP(0x018)
	UTRAP(0x019); UTRAP(0x01a); UTRAP(0x01b); UTRAP(0x01c); UTRAP(0x01d)
	UTRAP(0x01e); UTRAP(0x01f)
	TRAP(T_FPDISABLED)	! 020 = fp instr, but EF bit off in psr
	TRAP(T_FP_IEEE_754)	! 021 = ieee 754 exception
	TRAP(T_FP_OTHER)	! 022 = other fp exception
	TRAP(T_TAGOF)		! 023 = tag overflow
	TRACEWIN		! DEBUG -- 4 insns
	rdpr %cleanwin, %o7		! 024-027 = clean window trap
	inc %o7				! This handler is in-lined and cannot fault
#ifdef DEBUG
	set 0xbadcafe, %l0		! DEBUG -- compiler should not rely on zeroed registers.
#else
	clr %l0
#endif
	wrpr %g0, %o7, %cleanwin	! Nucleus (trap&IRQ) code does not need clean windows

	mov %l0,%l1; mov %l0,%l2	! Clear out %l0-%l7 and %o0-%o7, inc %cleanwin, and done
	mov %l0,%l3; mov %l0,%l4
#if 0
#ifdef DIAGNOSTIC
	!!
	!! Check the sp redzone
	!!
	!! Since we can't spill the current window, we'll just keep
	!! track of the frame pointer.  Problems occur when the routine
	!! allocates and uses stack storage.
	!!
!	rdpr	%wstate, %l5	! User stack?
!	cmp	%l5, WSTATE_KERN
!	bne,pt	%icc, 7f
	sethi	%hi(CPCB), %l5
	LDPTR	[%l5 + %lo(CPCB)], %l5	! If pcb < fp < pcb+sizeof(pcb)
	inc	PCB_SIZE, %l5		! then we have a stack overflow
	btst	%fp, 1			! 64-bit stack?
	sub	%fp, %l5, %l7
	bnz,a,pt	%icc, 1f
	inc	BIAS, %l7		! Remove BIAS
1:
	cmp	%l7, PCB_SIZE
	blu	%xcc, cleanwin_overflow
#endif
#endif
	mov %l0, %l5
	mov %l0, %l6; mov %l0, %l7; mov %l0, %o0; mov %l0, %o1

	mov %l0, %o2; mov %l0, %o3; mov %l0, %o4; mov %l0, %o5;
	mov %l0, %o6; mov %l0, %o7
	CLRTT
	retry; nop; NOTREACHED; TA32
	TRAP(T_DIV0)		! 028 = divide by zero
	UTRAP(0x029)		! 029 = internal processor error
	UTRAP(0x02a); UTRAP(0x02b); UTRAP(0x02c); UTRAP(0x02d); UTRAP(0x02e); UTRAP(0x02f)
	VTRAP(T_DATAFAULT, winfault)	! 030 = data fetch fault
	UTRAP(0x031)		! 031 = data MMU miss -- no MMU
	VTRAP(T_DATA_ERROR, winfault)	! 032 = data access error
	VTRAP(T_DATA_PROT, winfault)	! 033 = data protection fault
	TRAP(T_ALIGN)		! 034 = address alignment error -- we could fix it inline...
	TRAP(T_LDDF_ALIGN)	! 035 = LDDF address alignment error -- we could fix it inline...
	TRAP(T_STDF_ALIGN)	! 036 = STDF address alignment error -- we could fix it inline...
	TRAP(T_PRIVACT)		! 037 = privileged action
	UTRAP(0x038); UTRAP(0x039); UTRAP(0x03a); UTRAP(0x03b); UTRAP(0x03c);
	UTRAP(0x03d); UTRAP(0x03e); UTRAP(0x03f);
	VTRAP(T_ASYNC_ERROR, winfault)	! 040 = asynchronous data error
	SOFTINT4U(1, IE_L1)	! 041 = level 1 interrupt
	HARDINT4U(2)		! 042 = level 2 interrupt
	HARDINT4U(3)		! 043 = level 3 interrupt
	SOFTINT4U(4, IE_L4)	! 044 = level 4 interrupt
	HARDINT4U(5)		! 045 = level 5 interrupt
	SOFTINT4U(6, IE_L6)	! 046 = level 6 interrupt
	HARDINT4U(7)		! 047 = level 7 interrupt
	HARDINT4U(8)		! 048 = level 8 interrupt
	HARDINT4U(9)		! 049 = level 9 interrupt
	HARDINT4U(10)		! 04a = level 10 interrupt
	HARDINT4U(11)		! 04b = level 11 interrupt
	ZS_INTERRUPT4U		! 04c = level 12 (zs) interrupt
	HARDINT4U(13)		! 04d = level 13 interrupt
	HARDINT4U(14)		! 04e = level 14 interrupt
	HARDINT4U(15)		! 04f = nonmaskable interrupt
	UTRAP(0x050); UTRAP(0x051); UTRAP(0x052); UTRAP(0x053); UTRAP(0x054); UTRAP(0x055)
	UTRAP(0x056); UTRAP(0x057); UTRAP(0x058); UTRAP(0x059); UTRAP(0x05a); UTRAP(0x05b)
	UTRAP(0x05c); UTRAP(0x05d); UTRAP(0x05e); UTRAP(0x05f)
	VTRAP(0x060, interrupt_vector);	! 060 = interrupt vector
	TRAP(T_PA_WATCHPT)	! 061 = physical address data watchpoint
	TRAP(T_VA_WATCHPT)	! 062 = virtual address data watchpoint
	UTRAP(T_ECCERR)		! We'll implement this one later
ufast_IMMU_miss:		! 064 = fast instr access MMU miss
	TRACEFLT		! DEBUG
	ldxa [%g0] ASI_IMMU_8KPTR, %g2	! Load IMMU 8K TSB pointer
#ifdef NO_TSB
	ba,a %icc, instr_miss
#endif
	ldxa [%g0] ASI_IMMU, %g1	! Load IMMU tag target register
	ldda [%g2] ASI_NUCLEUS_QUAD_LDD, %g4	! Load TSB tag:data into %g4:%g5
	brgez,pn %g5, instr_miss	! Entry invalid?  Punt
	 cmp %g1, %g4			! Compare TLB tags
	bne,pn %xcc, instr_miss		! Got right tag?
	 nop
	CLRTT
	stxa %g5, [%g0] ASI_IMMU_DATA_IN	! Enter new mapping
	retry				! Try new mapping
1:
	sir
	TA32
ufast_DMMU_miss:		! 068 = fast data access MMU miss
	TRACEFLT		! DEBUG
	ldxa [%g0] ASI_DMMU_8KPTR, %g2	! Load DMMU 8K TSB pointer

#ifdef NO_TSB
	ba,a %icc, data_miss
#endif
	ldxa [%g0] ASI_DMMU, %g1	! Load DMMU tag target register
	ldda [%g2] ASI_NUCLEUS_QUAD_LDD, %g4	! Load TSB tag and data into %g4 and %g5
	brgez,pn %g5, data_miss		! Entry invalid?  Punt
	 cmp %g1, %g4			! Compare TLB tags
	bnz,pn %xcc, data_miss		! Got right tag?
	 nop
	CLRTT
#ifdef TRAPSTATS
	sethi %hi(_C_LABEL(udhit)), %g1
	lduw [%g1+%lo(_C_LABEL(udhit))], %g2
	inc %g2
	stw %g2, [%g1+%lo(_C_LABEL(udhit))]
#endif
	stxa %g5, [%g0] ASI_DMMU_DATA_IN	! Enter new mapping
	retry				! Try new mapping
1:
	sir
	TA32
ufast_DMMU_protection:		! 06c = fast data access MMU protection
	TRACEFLT		! DEBUG -- we're perilously close to 32 insns
#ifdef TRAPSTATS
	sethi %hi(_C_LABEL(udprot)), %g1
	lduw [%g1+%lo(_C_LABEL(udprot))], %g2
	inc %g2
	stw %g2, [%g1+%lo(_C_LABEL(udprot))]
#endif
#ifdef HWREF
	ba,a,pt %xcc, dmmu_write_fault
#else
	ba,a,pt %xcc, winfault
#endif
	nop
	TA32
	UTRAP(0x070)		! Implementation dependent traps
	UTRAP(0x071); UTRAP(0x072); UTRAP(0x073); UTRAP(0x074); UTRAP(0x075); UTRAP(0x076)
	UTRAP(0x077); UTRAP(0x078); UTRAP(0x079); UTRAP(0x07a); UTRAP(0x07b); UTRAP(0x07c)
	UTRAP(0x07d); UTRAP(0x07e); UTRAP(0x07f)
TABLE(uspill):
	SPILL64(uspill8,ASI_AIUS)	! 0x080 spill_0_normal -- used to save user windows in user mode
	SPILL32(uspill4,ASI_AIUS)	! 0x084 spill_1_normal
	SPILLBOTH(uspill8,uspill4,ASI_AIUS)	! 0x088 spill_2_normal
	UTRAP(0x08c); TA32	! 0x08c spill_3_normal
TABLE(kspill):
	SPILL64(kspill8,ASI_N)	! 0x090 spill_4_normal -- used to save supervisor windows
	SPILL32(kspill4,ASI_N)	! 0x094 spill_5_normal
	SPILLBOTH(kspill8,kspill4,ASI_N)	! 0x098 spill_6_normal
	UTRAP(0x09c); TA32	! 0x09c spill_7_normal
TABLE(uspillk):
	SPILL64(uspillk8,ASI_AIUS)	! 0x0a0 spill_0_other -- used to save user windows in supervisor mode
	SPILL32(uspillk4,ASI_AIUS)	! 0x0a4 spill_1_other
	SPILLBOTH(uspillk8,uspillk4,ASI_AIUS)	! 0x0a8 spill_2_other
	UTRAP(0x0ac); TA32	! 0x0ac spill_3_other
	UTRAP(0x0b0); TA32	! 0x0b0 spill_4_other
	UTRAP(0x0b4); TA32	! 0x0b4 spill_5_other
	UTRAP(0x0b8); TA32	! 0x0b8 spill_6_other
	UTRAP(0x0bc); TA32	! 0x0bc spill_7_other
TABLE(ufill):
	FILL64(ufill8,ASI_AIUS)	! 0x0c0 fill_0_normal -- used to fill windows when running user mode
	FILL32(ufill4,ASI_AIUS)	! 0x0c4 fill_1_normal
	FILLBOTH(ufill8,ufill4,ASI_AIUS)	! 0x0c8 fill_2_normal
	UTRAP(0x0cc); TA32	! 0x0cc fill_3_normal
TABLE(kfill):
	FILL64(kfill8,ASI_N)	! 0x0d0 fill_4_normal -- used to fill windows when running supervisor mode
	FILL32(kfill4,ASI_N)	! 0x0d4 fill_5_normal
	FILLBOTH(kfill8,kfill4,ASI_N)	! 0x0d8 fill_6_normal
	UTRAP(0x0dc); TA32	! 0x0dc fill_7_normal
TABLE(ufillk):
	FILL64(ufillk8,ASI_AIUS)	! 0x0e0 fill_0_other
	FILL32(ufillk4,ASI_AIUS)	! 0x0e4 fill_1_other
	FILLBOTH(ufillk8,ufillk4,ASI_AIUS)	! 0x0e8 fill_2_other
	UTRAP(0x0ec); TA32	! 0x0ec fill_3_other
	UTRAP(0x0f0); TA32	! 0x0f0 fill_4_other
	UTRAP(0x0f4); TA32	! 0x0f4 fill_5_other
	UTRAP(0x0f8); TA32	! 0x0f8 fill_6_other
	UTRAP(0x0fc); TA32	! 0x0fc fill_7_other
TABLE(syscall):
	SYSCALL			! 0x100 = sun syscall
	BPT			! 0x101 = pseudo breakpoint instruction
	STRAP(0x102); STRAP(0x103); STRAP(0x104); STRAP(0x105); STRAP(0x106); STRAP(0x107)
	SYSCALL			! 0x108 = svr4 syscall
	SYSCALL			! 0x109 = bsd syscall
	BPT_KGDB_EXEC		! 0x10a = enter kernel gdb on kernel startup
	STRAP(0x10b); STRAP(0x10c); STRAP(0x10d); STRAP(0x10e); STRAP(0x10f);
	STRAP(0x110); STRAP(0x111); STRAP(0x112); STRAP(0x113); STRAP(0x114); STRAP(0x115); STRAP(0x116); STRAP(0x117)
	STRAP(0x118); STRAP(0x119); STRAP(0x11a); STRAP(0x11b); STRAP(0x11c); STRAP(0x11d); STRAP(0x11e); STRAP(0x11f)
	STRAP(0x120); STRAP(0x121); STRAP(0x122); STRAP(0x123); STRAP(0x124); STRAP(0x125); STRAP(0x126); STRAP(0x127)
	STRAP(0x128); STRAP(0x129); STRAP(0x12a); STRAP(0x12b); STRAP(0x12c); STRAP(0x12d); STRAP(0x12e); STRAP(0x12f)
	STRAP(0x130); STRAP(0x131); STRAP(0x132); STRAP(0x133); STRAP(0x134); STRAP(0x135); STRAP(0x136); STRAP(0x137)
	STRAP(0x138); STRAP(0x139); STRAP(0x13a); STRAP(0x13b); STRAP(0x13c); STRAP(0x13d); STRAP(0x13e); STRAP(0x13f)
	SYSCALL			! 0x140 SVID syscall (Solaris 2.7)
	SYSCALL			! 0x141 SPARC International syscall
	SYSCALL			! 0x142 OS Vendor syscall
	SYSCALL			! 0x143 HW OEM syscall
	STRAP(0x144); STRAP(0x145); STRAP(0x146); STRAP(0x147)
	STRAP(0x148); STRAP(0x149); STRAP(0x14a); STRAP(0x14b); STRAP(0x14c); STRAP(0x14d); STRAP(0x14e); STRAP(0x14f)
	STRAP(0x150); STRAP(0x151); STRAP(0x152); STRAP(0x153); STRAP(0x154); STRAP(0x155); STRAP(0x156); STRAP(0x157)
	STRAP(0x158); STRAP(0x159); STRAP(0x15a); STRAP(0x15b); STRAP(0x15c); STRAP(0x15d); STRAP(0x15e); STRAP(0x15f)
	STRAP(0x160); STRAP(0x161); STRAP(0x162); STRAP(0x163); STRAP(0x164); STRAP(0x165); STRAP(0x166); STRAP(0x167)
	STRAP(0x168); STRAP(0x169); STRAP(0x16a); STRAP(0x16b); STRAP(0x16c); STRAP(0x16d); STRAP(0x16e); STRAP(0x16f)
	STRAP(0x170); STRAP(0x171); STRAP(0x172); STRAP(0x173); STRAP(0x174); STRAP(0x175); STRAP(0x176); STRAP(0x177)
	STRAP(0x178); STRAP(0x179); STRAP(0x17a); STRAP(0x17b); STRAP(0x17c); STRAP(0x17d); STRAP(0x17e); STRAP(0x17f)
	! Traps beyond 0x17f are reserved
	UTRAP(0x180); UTRAP(0x181); UTRAP(0x182); UTRAP(0x183); UTRAP(0x184); UTRAP(0x185); UTRAP(0x186); UTRAP(0x187)
	UTRAP(0x188); UTRAP(0x189); UTRAP(0x18a); UTRAP(0x18b); UTRAP(0x18c); UTRAP(0x18d); UTRAP(0x18e); UTRAP(0x18f)
	UTRAP(0x190); UTRAP(0x191); UTRAP(0x192); UTRAP(0x193); UTRAP(0x194); UTRAP(0x195); UTRAP(0x196); UTRAP(0x197)
	UTRAP(0x198); UTRAP(0x199); UTRAP(0x19a); UTRAP(0x19b); UTRAP(0x19c); UTRAP(0x19d); UTRAP(0x19e); UTRAP(0x19f)
	UTRAP(0x1a0); UTRAP(0x1a1); UTRAP(0x1a2); UTRAP(0x1a3); UTRAP(0x1a4); UTRAP(0x1a5); UTRAP(0x1a6); UTRAP(0x1a7)
	UTRAP(0x1a8); UTRAP(0x1a9); UTRAP(0x1aa); UTRAP(0x1ab); UTRAP(0x1ac); UTRAP(0x1ad); UTRAP(0x1ae); UTRAP(0x1af)
	UTRAP(0x1b0); UTRAP(0x1b1); UTRAP(0x1b2); UTRAP(0x1b3); UTRAP(0x1b4); UTRAP(0x1b5); UTRAP(0x1b6); UTRAP(0x1b7)
	UTRAP(0x1b8); UTRAP(0x1b9); UTRAP(0x1ba); UTRAP(0x1bb); UTRAP(0x1bc); UTRAP(0x1bd); UTRAP(0x1be); UTRAP(0x1bf)
	UTRAP(0x1c0); UTRAP(0x1c1); UTRAP(0x1c2); UTRAP(0x1c3); UTRAP(0x1c4); UTRAP(0x1c5); UTRAP(0x1c6); UTRAP(0x1c7)
	UTRAP(0x1c8); UTRAP(0x1c9); UTRAP(0x1ca); UTRAP(0x1cb); UTRAP(0x1cc); UTRAP(0x1cd); UTRAP(0x1ce); UTRAP(0x1cf)
	UTRAP(0x1d0); UTRAP(0x1d1); UTRAP(0x1d2); UTRAP(0x1d3); UTRAP(0x1d4); UTRAP(0x1d5); UTRAP(0x1d6); UTRAP(0x1d7)
	UTRAP(0x1d8); UTRAP(0x1d9); UTRAP(0x1da); UTRAP(0x1db); UTRAP(0x1dc); UTRAP(0x1dd); UTRAP(0x1de); UTRAP(0x1df)
	UTRAP(0x1e0); UTRAP(0x1e1); UTRAP(0x1e2); UTRAP(0x1e3); UTRAP(0x1e4); UTRAP(0x1e5); UTRAP(0x1e6); UTRAP(0x1e7)
930 UTRAP(0x1e8); UTRAP(0x1e9); UTRAP(0x1ea); UTRAP(0x1eb); UTRAP(0x1ec); UTRAP(0x1ed); UTRAP(0x1ee); UTRAP(0x1ef) 930 UTRAP(0x1e8); UTRAP(0x1e9); UTRAP(0x1ea); UTRAP(0x1eb); UTRAP(0x1ec); UTRAP(0x1ed); UTRAP(0x1ee); UTRAP(0x1ef)
931 UTRAP(0x1f0); UTRAP(0x1f1); UTRAP(0x1f2); UTRAP(0x1f3); UTRAP(0x1f4); UTRAP(0x1f5); UTRAP(0x1f6); UTRAP(0x1f7) 931 UTRAP(0x1f0); UTRAP(0x1f1); UTRAP(0x1f2); UTRAP(0x1f3); UTRAP(0x1f4); UTRAP(0x1f5); UTRAP(0x1f6); UTRAP(0x1f7)
932 UTRAP(0x1f8); UTRAP(0x1f9); UTRAP(0x1fa); UTRAP(0x1fb); UTRAP(0x1fc); UTRAP(0x1fd); UTRAP(0x1fe); UTRAP(0x1ff) 932 UTRAP(0x1f8); UTRAP(0x1f9); UTRAP(0x1fa); UTRAP(0x1fb); UTRAP(0x1fc); UTRAP(0x1fd); UTRAP(0x1fe); UTRAP(0x1ff)

	/* Traps from TL>0 -- traps from supervisor mode */
#undef TABLE
#ifdef __STDC__
#define TABLE(name)	nucleus_ ## name
#else
#define TABLE(name)	nucleus_/**/name
#endif
trapbase_priv:
	UTRAP(0x000)		! 000 = reserved -- Use it to boot
	/* We should not get the next 5 traps */
	UTRAP(0x001)		! 001 = POR Reset -- ROM should get this
	UTRAP(0x002)		! 002 = WDR Watchdog -- ROM should get this
	UTRAP(0x003)		! 003 = XIR -- ROM should get this
	UTRAP(0x004)		! 004 = SIR -- ROM should get this
	UTRAP(0x005)		! 005 = RED state exception
	UTRAP(0x006); UTRAP(0x007)
ktextfault:
	VTRAP(T_INST_EXCEPT, textfault)	! 008 = instr. access except
	VTRAP(T_TEXTFAULT, textfault)	! 009 = instr access MMU miss -- no MMU
	VTRAP(T_INST_ERROR, textfault)	! 00a = instr. access err
	UTRAP(0x00b); UTRAP(0x00c); UTRAP(0x00d); UTRAP(0x00e); UTRAP(0x00f)
	TRAP(T_ILLINST)			! 010 = illegal instruction
	TRAP(T_PRIVINST)		! 011 = privileged instruction
	UTRAP(0x012)			! 012 = unimplemented LDD
	UTRAP(0x013)			! 013 = unimplemented STD
	UTRAP(0x014); UTRAP(0x015); UTRAP(0x016); UTRAP(0x017); UTRAP(0x018)
	UTRAP(0x019); UTRAP(0x01a); UTRAP(0x01b); UTRAP(0x01c); UTRAP(0x01d)
	UTRAP(0x01e); UTRAP(0x01f)
	TRAP(T_FPDISABLED)		! 020 = fp instr, but EF bit off in psr
	TRAP(T_FP_IEEE_754)		! 021 = ieee 754 exception
	TRAP(T_FP_OTHER)		! 022 = other fp exception
	TRAP(T_TAGOF)			! 023 = tag overflow
	TRACEWIN			! DEBUG
	clr	%l0
#ifdef DEBUG
	set	0xbadbeef, %l0		! DEBUG
#endif
	mov	%l0, %l1; mov	%l0, %l2	! 024-027 = clean window trap
	rdpr	%cleanwin, %o7		! This handler is in-lined and cannot fault
	inc	%o7; mov	%l0, %l3	! Nucleus (trap&IRQ) code does not need clean windows
	wrpr	%g0, %o7, %cleanwin	! Clear out %l0-%l7 and %o0-%o7, inc %cleanwin, and done
#ifdef NOT_DEBUG
	!!
	!! Check the sp redzone
	!!
	rdpr	%wstate, t1
	cmp	t1, WSTATE_KERN
	bne,pt	icc, 7f
	 sethi	%hi(_C_LABEL(redzone)), t1
	ldx	[t1 + %lo(_C_LABEL(redzone))], t2
	cmp	%sp, t2			! if sp >= t2, not in red zone
	blu	panic_red		! and can continue normally
7:
#endif
	mov	%l0, %l4; mov	%l0, %l5; mov	%l0, %l6; mov	%l0, %l7
	mov	%l0, %o0; mov	%l0, %o1; mov	%l0, %o2; mov	%l0, %o3

	mov	%l0, %o4; mov	%l0, %o5; mov	%l0, %o6; mov	%l0, %o7
	CLRTT
	retry; nop; TA32
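The inline clean-window handler above (trap types 024-027) just zeroes the new window's locals and outs (or poisons them with 0xbadbeef under DEBUG) and bumps %cleanwin, since nucleus code never needs to be protected from stale user register contents. A toy C model of that step, not kernel code:

```c
#include <string.h>
#include <stdint.h>

/* Model of the 024-027 clean-window handler: mark one more window
 * clean by clearing its local and out registers and incrementing
 * the %cleanwin count. */
struct window { uint64_t l[8], o[8]; };

static int clean_one_window(struct window *w, int cleanwin)
{
	memset(w->l, 0, sizeof w->l);	/* clr %l0; mov %l0, %l1; ... */
	memset(w->o, 0, sizeof w->o);	/* ... and likewise the outs */
	return cleanwin + 1;		/* inc %o7; wrpr %g0, %o7, %cleanwin */
}
```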
	TRAP(T_DIV0)			! 028 = divide by zero
	UTRAP(0x029)			! 029 = internal processor error
	UTRAP(0x02a); UTRAP(0x02b); UTRAP(0x02c); UTRAP(0x02d); UTRAP(0x02e); UTRAP(0x02f)
kdatafault:
	VTRAP(T_DATAFAULT, winfault)	! 030 = data fetch fault
	UTRAP(0x031)			! 031 = data MMU miss -- no MMU
	VTRAP(T_DATA_ERROR, winfault)	! 032 = data access error
@@ -2821,3619 +2821,3651 @@ instr_miss:
	 nop

	sll	%g5, 3, %g5
	add	%g6, %g4, %g4
	ldxa	[%g4] ASI_PHYS_CACHED, %g4
	srlx	%g3, PTSHIFT, %g6	! Convert to ptab offset
	and	%g6, PTMASK, %g6
	add	%g5, %g4, %g5
	brz,pn	%g4, textfault		! NULL entry? check somewhere else
	 nop

	ldxa	[%g5] ASI_PHYS_CACHED, %g4
	sll	%g6, 3, %g6
	brz,pn	%g4, textfault		! NULL entry? check somewhere else
	 add	%g6, %g4, %g6
1:
	ldxa	[%g6] ASI_PHYS_CACHED, %g4
	brgez,pn %g4, textfault
	 nop

	/* Check if it's an executable mapping. */
	andcc	%g4, TTE_EXEC, %g0
	bz,pn	%xcc, textfault
	 nop

	or	%g4, TTE_ACCESS, %g7	! Update accessed bit
	btst	TTE_ACCESS, %g4		! Need to update access bit?
	bne,pt	%xcc, 1f
	 nop
	casxa	[%g6] ASI_PHYS_CACHED, %g4, %g7	! and store it
	cmp	%g4, %g7
	bne,pn	%xcc, 1b
	 or	%g4, TTE_ACCESS, %g4	! Update accessed bit
1:
	stx	%g1, [%g2]		! Update TSB entry tag
	stx	%g4, [%g2+8]		! Update TSB entry data
#ifdef DEBUG
	set	DATA_START, %g6		! debug
	stx	%g3, [%g6+8]		! debug
	set	0xaa, %g3		! debug
	stx	%g4, [%g6]		! debug -- what we tried to enter in TLB
	stb	%g3, [%g6+0x20]		! debug
#endif
	stxa	%g4, [%g0] ASI_IMMU_DATA_IN ! Enter new mapping
	membar	#Sync
	CLRTT
	retry
	NOTREACHED
	!!
	!! Check our prom mappings -- temporary
	!!

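The accessed-bit update above is a compare-and-swap retry loop: read the TTE, OR in TTE_ACCESS, and casxa the result back, retrying if another CPU changed the entry in between (and skipping the store if the bit is already set). A minimal C sketch of the same pattern; the TTE_ACCESS bit position and the atomic primitive are stand-ins, not the kernel's real definitions:

```c
#include <stdatomic.h>
#include <stdint.h>

#define TTE_ACCESS	(1ULL << 5)	/* placeholder bit, not the real TTE layout */

/* Set the accessed bit in *tte atomically, retrying on CAS failure,
 * mirroring the ldxa/casxa loop in instr_miss. */
static uint64_t tte_set_accessed(_Atomic uint64_t *tte)
{
	uint64_t old = atomic_load(tte);

	while (!(old & TTE_ACCESS)) {
		/* Try to install old | TTE_ACCESS; on failure, 'old' is
		 * reloaded with the current value and we retry. */
		if (atomic_compare_exchange_weak(tte, &old, old | TTE_ACCESS))
			return old | TTE_ACCESS;
	}
	return old;			/* bit already set: nothing to store */
}
```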
/*
 * Each memory text access fault, from user or kernel mode,
 * comes here.
 *
 * We will assume that %pil is not lost so we won't bother to save it
 * unless we're in an interrupt handler.
 *
 * On entry:
 *	We are on one of the alternate set of globals
 *	%g1 = MMU tag target
 *	%g2 = %tl
 *	%g3 = %tl - 1
 *
 * On return:
 *
 */

textfault:
	wrpr	%g0, PSTATE_KERN|PSTATE_AG, %pstate	! We need to save volatile stuff to AG regs
#ifdef TRAPS_USE_IG
	wrpr	%g0, PSTATE_KERN|PSTATE_IG, %pstate	! We need to save volatile stuff to IG regs
#endif
	wr	%g0, ASI_IMMU, %asi
	ldxa	[%g0 + TLB_TAG_ACCESS] %asi, %g1	! Get fault address from tag access register
	ldxa	[SFSR] %asi, %g3	! get sync fault status register
	membar	#LoadStore
	stxa	%g0, [SFSR] %asi	! Clear out old info

	TRAP_SETUP(-CC64FSZ-TF_SIZE)
	INCR(_C_LABEL(uvmexp)+V_FAULTS)	! cnt.v_faults++ (clobbers %o0,%o1,%o2)

	mov	%g3, %o3

	wrpr	%g0, PSTATE_KERN, %pstate	! Switch to normal globals
	ldxa	[%g0] ASI_AFSR, %o4	! get async fault status
	ldxa	[%g0] ASI_AFAR, %o5	! get async fault address
	mov	-1, %o0
	stxa	%o0, [%g0] ASI_AFSR	! Clear this out
	stx	%g1, [%sp + CC64FSZ + STKB + TF_G + (1*8)]	! save g1
	stx	%g2, [%sp + CC64FSZ + STKB + TF_G + (2*8)]	! save g2
	stx	%g3, [%sp + CC64FSZ + STKB + TF_G + (3*8)]	! (sneak g3 in here)
	rdpr	%tt, %o1		! Find out what caused this trap
	stx	%g4, [%sp + CC64FSZ + STKB + TF_G + (4*8)]	! sneak in g4
	rdpr	%tstate, %g1
	stx	%g5, [%sp + CC64FSZ + STKB + TF_G + (5*8)]	! sneak in g5
	rdpr	%tpc, %o2		! sync virt addr; must be read first
	stx	%g6, [%sp + CC64FSZ + STKB + TF_G + (6*8)]	! sneak in g6
	rdpr	%tnpc, %g3
	stx	%g7, [%sp + CC64FSZ + STKB + TF_G + (7*8)]	! sneak in g7
	rd	%y, %g5			! save y

	/* Finish stackframe, call C trap handler */
	stx	%g1, [%sp + CC64FSZ + STKB + TF_TSTATE]	! set tf.tf_psr, tf.tf_pc
	sth	%o1, [%sp + CC64FSZ + STKB + TF_TT]	! debug

	stx	%o2, [%sp + CC64FSZ + STKB + TF_PC]
	stx	%g3, [%sp + CC64FSZ + STKB + TF_NPC]	! set tf.tf_npc

	rdpr	%pil, %g4
	stb	%g4, [%sp + CC64FSZ + STKB + TF_PIL]
	stb	%g4, [%sp + CC64FSZ + STKB + TF_OLDPIL]

	rdpr	%tl, %g7
	dec	%g7
	movrlz	%g7, %g0, %g7
	CHKPT(%g1,%g3,0x22)
	wrpr	%g0, %g7, %tl		! Revert to kernel mode

	wr	%g0, ASI_PRIMARY_NOFAULT, %asi	! Restore default ASI
	flushw				! Get rid of any user windows so we don't deadlock

	!! In the EMBEDANY memory model %g4 points to the start of the data segment.
	!! In our case we need to clear it before calling any C-code
	clr	%g4

	/* Use trap type to see what handler to call */
	cmp	%o1, T_INST_ERROR
	be,pn	%xcc, text_error
	 st	%g5, [%sp + CC64FSZ + STKB + TF_Y]	! set tf.tf_y

	wrpr	%g0, PSTATE_INTR, %pstate	! reenable interrupts
	call	_C_LABEL(text_access_fault)	! text_access_fault(&tf, type, pc, sfsr)
	 add	%sp, CC64FSZ + STKB, %o0	! (argument: &tf)
text_recover:
	CHKPT(%o1,%o2,2)
	wrpr	%g0, PSTATE_KERN, %pstate	! disable interrupts
	b	return_from_trap	! go return
	 ldx	[%sp + CC64FSZ + STKB + TF_TSTATE], %g1	! Load this for return_from_trap
	NOTREACHED

text_error:
	wrpr	%g0, PSTATE_INTR, %pstate	! reenable interrupts
	call	_C_LABEL(text_access_error)	! text_access_error(&tf, type, sfva [pc], sfsr,
						!	afva, afsr);
	 add	%sp, CC64FSZ + STKB, %o0	! (argument: &tf)
	ba	text_recover
	 nop
	NOTREACHED

/*
 * We're here because we took an alignment fault in NUCLEUS context.
 * This could be a kernel bug or it could be due to saving a user
 * window to an invalid stack pointer.
 *
 * If the latter is the case, we could try to emulate unaligned accesses,
 * but we really don't know where to store the registers since we can't
 * determine if there's a stack bias. Or we could store all the regs
 * into the PCB and punt, until the user program uses up all the CPU's
 * register windows and we run out of places to store them. So for
 * simplicity we'll just blow them away and enter the trap code which
 * will generate a bus error. Debugging the problem will be a bit
 * complicated since lots of register windows will be lost, but what
 * can we do?
 */
checkalign:
	rdpr	%tl, %g2
	subcc	%g2, 1, %g1
	bneg,pn	%icc, slowtrap		! Huh?
	 sethi	%hi(CPCB), %g6		! get current pcb

	wrpr	%g1, 0, %tl
	rdpr	%tt, %g7
	rdpr	%tstate, %g4
	andn	%g7, 0x3f, %g5
	cmp	%g5, 0x080		! window spill traps are all 0b 0000 10xx xxxx
	bne,a,pn %icc, slowtrap
	 wrpr	%g1, 0, %tl		! Revert TL  XXX wrpr in a delay slot...

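The test above leans on the trap-type encoding: window spill vectors occupy 0x080-0x0bf (0b0000_10xx_xxxx), so masking off the low six bits and comparing against 0x080 identifies any spill trap, while the later mask of 0xff0 against 0x90 picks out just the kernel spill handlers at 0x090-0x09f. Both predicates, sketched in plain C for illustration:

```c
#include <stdbool.h>

/* Window spill vectors are 0b0000_10xx_xxxx: trap types 0x080-0x0bf. */
static bool is_window_spill(unsigned tt)
{
	return (tt & ~0x3fu) == 0x080;	/* andn %g7, 0x3f; cmp %g5, 0x080 */
}

/* Kernel spill handlers sit at trap types 0x090-0x09f. */
static bool is_kernel_spill(unsigned tt)
{
	return (tt & 0xff0u) == 0x090;	/* and %g7, 0xff0; cmp %g5, 0x90 */
}
```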
#ifdef DEBUG
	cmp	%g7, 0x34		! If we took a datafault just before this trap
	bne,pt	%icc, checkalignspill	! our stack's probably bad so we need to switch somewhere else
	 nop

	!!
	!! Double data fault -- bad stack?
	!!
	wrpr	%g2, %tl		! Restore trap level.
	sir				! Just issue a reset and don't try to recover.
	mov	%fp, %l6		! Save the frame pointer
	set	EINTSTACK+USPACE+CC64FSZ-STKB, %fp	! Set the frame pointer to the middle of the idle stack
	add	%fp, -CC64FSZ, %sp	! Create a stackframe
	wrpr	%g0, 15, %pil		! Disable interrupts, too
	wrpr	%g0, %g0, %canrestore	! Our stack is hosed and our PCB
	wrpr	%g0, 7, %cansave	! probably is too, so blow away
	ba	slowtrap		! all our register windows.
	 wrpr	%g0, 0x101, %tt
#endif
checkalignspill:
	/*
	 * %g1 -- current tl
	 * %g2 -- original tl
	 * %g4 -- tstate
	 * %g7 -- tt
	 */

	and	%g4, CWP, %g5
	wrpr	%g5, %cwp		! Go back to the original register window

	/*
	 * Remember:
	 *
	 * %otherwin = 0
	 * %cansave = NWINDOWS - 2 - %canrestore
	 */

	rdpr	%otherwin, %g6
	rdpr	%canrestore, %g3
	rdpr	%ver, %g5
	sub	%g3, %g6, %g3		! Calculate %canrestore - %otherwin
	and	%g5, CWP, %g5		! NWINDOWS-1
	movrlz	%g3, %g0, %g3		! Clamp at zero
	wrpr	%g0, 0, %otherwin
	wrpr	%g3, 0, %canrestore	! This is the new canrestore
	dec	%g5			! NWINDOWS-2
	wrpr	%g5, 0, %cleanwin	! Set cleanwin to max, since we're in-kernel
	sub	%g5, %g3, %g5		! NWINDOWS-2-%canrestore
	wrpr	%g5, 0, %cansave

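With %otherwin forced to zero, the sequence above re-derives the usual window invariant %cansave = NWINDOWS - 2 - %canrestore, after first folding %otherwin into %canrestore and clamping at zero. A C sketch of the recomputation; NWINDOWS = 8 in the test is just an example value:

```c
/* Recompute the window registers the way checkalignspill does:
 * fold %otherwin into %canrestore (clamped at 0) and re-derive
 * %cansave from the invariant cansave = NWINDOWS - 2 - canrestore. */
struct winregs { int canrestore, otherwin, cansave, cleanwin; };

static void recompute_windows(struct winregs *w, int nwindows)
{
	int cr = w->canrestore - w->otherwin;	/* sub %g3, %g6, %g3 */

	if (cr < 0)
		cr = 0;				/* movrlz: clamp at zero */
	w->otherwin   = 0;			/* wrpr %g0, 0, %otherwin */
	w->canrestore = cr;
	w->cleanwin   = nwindows - 2;		/* max: we're in-kernel */
	w->cansave    = nwindows - 2 - cr;	/* the invariant */
}
```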
	wrpr	%g0, T_ALIGN, %tt	! This was an alignment fault
	/*
	 * Now we need to determine if this was a userland store or not.
	 * Userland stores occur in anything other than the kernel spill
	 * handlers (trap type 09x).
	 */
	and	%g7, 0xff0, %g5
	cmp	%g5, 0x90
	bz,pn	%icc, slowtrap
	 nop
	bclr	TSTATE_PRIV, %g4
	wrpr	%g4, 0, %tstate
	ba,a,pt	%icc, slowtrap
	 nop

/*
 * slowtrap() builds a trap frame and calls trap().
 * This is called `slowtrap' because it *is*....
 * We have to build a full frame for ptrace(), for instance.
 *
 * Registers:
 *
 */
slowtrap:
#ifdef TRAPS_USE_IG
	wrpr	%g0, PSTATE_KERN|PSTATE_IG, %pstate	! DEBUG
#endif
#ifdef DIAGNOSTIC
	/* Make sure kernel stack is aligned */
	btst	0x03, %sp		! 32-bit stack OK?
	and	%sp, 0x07, %g4		! 64-bit stack OK?
	bz,pt	%icc, 1f
	 cmp	%g4, 0x1		! Must end in 0b001
	be,pt	%icc, 1f
	 rdpr	%wstate, %g7
	cmp	%g7, WSTATE_KERN
	bnz,pt	%icc, 1f		! User stack -- we'll blow it away
	 nop
	sethi	%hi(PANICSTACK), %sp
	LDPTR	[%sp + %lo(PANICSTACK)], %sp
	add	%sp, -CC64FSZ-STKB, %sp
1:
#endif
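The DIAGNOSTIC check above accepts two stack shapes: a 32-bit frame, where %sp is 4-byte aligned, and a 64-bit frame, where %sp carries the SPARC V9 ABI stack bias of 2047 and therefore ends in 0b001 (sp & 7 == 1). The predicate, sketched in C:

```c
#include <stdbool.h>
#include <stdint.h>

/* A kernel stack pointer is acceptable if it is either a 4-byte-aligned
 * 32-bit frame, or a biased 64-bit frame ending in 0b001
 * (the V9 ABI stores sp = real_sp - 2047). */
static bool ksp_aligned(uintptr_t sp)
{
	if ((sp & 0x3) == 0)		/* btst 0x03, %sp: 32-bit stack OK */
		return true;
	return (sp & 0x7) == 0x1;	/* 64-bit stack: must end in 0b001 */
}
```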
	rdpr	%tt, %g4
	rdpr	%tstate, %g1
	rdpr	%tpc, %g2
	rdpr	%tnpc, %g3

	TRAP_SETUP(-CC64FSZ-TF_SIZE)
Lslowtrap_reenter:
	stx	%g1, [%sp + CC64FSZ + STKB + TF_TSTATE]
	mov	%g4, %o1		! (type)
	stx	%g2, [%sp + CC64FSZ + STKB + TF_PC]
	rd	%y, %g5
	stx	%g3, [%sp + CC64FSZ + STKB + TF_NPC]
	mov	%g1, %o3		! (pstate)
	st	%g5, [%sp + CC64FSZ + STKB + TF_Y]
	mov	%g2, %o2		! (pc)
	sth	%o1, [%sp + CC64FSZ + STKB + TF_TT]	! debug

	wrpr	%g0, PSTATE_KERN, %pstate	! Get back to normal globals
	stx	%g1, [%sp + CC64FSZ + STKB + TF_G + (1*8)]
	stx	%g2, [%sp + CC64FSZ + STKB + TF_G + (2*8)]
	add	%sp, CC64FSZ + STKB, %o0	! (&tf)
	stx	%g3, [%sp + CC64FSZ + STKB + TF_G + (3*8)]
	stx	%g4, [%sp + CC64FSZ + STKB + TF_G + (4*8)]
	stx	%g5, [%sp + CC64FSZ + STKB + TF_G + (5*8)]
	rdpr	%pil, %g5
	stx	%g6, [%sp + CC64FSZ + STKB + TF_G + (6*8)]
	stx	%g7, [%sp + CC64FSZ + STKB + TF_G + (7*8)]
	stb	%g5, [%sp + CC64FSZ + STKB + TF_PIL]
	stb	%g5, [%sp + CC64FSZ + STKB + TF_OLDPIL]
	/*
	 * Phew, ready to enable traps and call C code.
	 */
	rdpr	%tl, %g1
	dec	%g1
	movrlz	%g1, %g0, %g1
	CHKPT(%g2,%g3,0x24)
	wrpr	%g0, %g1, %tl		! Revert to kernel mode
	!! In the EMBEDANY memory model %g4 points to the start of the data segment.
	!! In our case we need to clear it before calling any C-code
	clr	%g4

	wr	%g0, ASI_PRIMARY_NOFAULT, %asi	! Restore default ASI
	wrpr	%g0, PSTATE_INTR, %pstate	! traps on again
	call	_C_LABEL(trap)		! trap(tf, type, pc, pstate)
	 nop

	CHKPT(%o1,%o2,3)
	ba,a,pt	%icc, return_from_trap
	 nop
	NOTREACHED
#if 1
/*
 * This code is no longer needed.
 */
/*
 * Do a `software' trap by re-entering the trap code, possibly first
 * switching from interrupt stack to kernel stack. This is used for
 * scheduling and signal ASTs (which generally occur from softclock or
 * tty or net interrupts).
 *
 * We enter with the trap type in %g1. All we have to do is jump to
 * Lslowtrap_reenter above, but maybe after switching stacks....
 *
 * We should be running alternate globals. The normal globals and
 * out registers were just loaded from the old trap frame.
 *
 *	Input Params:
 *	%g1 = tstate
 *	%g2 = tpc
 *	%g3 = tnpc
 *	%g4 = tt == T_AST
 */
softtrap:
	sethi	%hi(EINTSTACK-STKB), %g5
	sethi	%hi(EINTSTACK-INTSTACK), %g7
	or	%g5, %lo(EINTSTACK-STKB), %g5
	dec	%g7
	sub	%g5, %sp, %g5
	sethi	%hi(CPCB), %g6
	andncc	%g5, %g7, %g0
	bnz,pt	%xcc, Lslowtrap_reenter
	 LDPTR	[%g6 + %lo(CPCB)], %g7
	set	USPACE-CC64FSZ-TF_SIZE-STKB, %g5
	add	%g7, %g5, %g6
	SET_SP_REDZONE(%g7, %g5)
#ifdef DEBUG
	stx	%g1, [%g6 + CC64FSZ + STKB + TF_FAULT]	! Generate a new trapframe
#endif
	stx	%i0, [%g6 + CC64FSZ + STKB + TF_O + (0*8)]	! but don't bother with
	stx	%i1, [%g6 + CC64FSZ + STKB + TF_O + (1*8)]	! locals and ins
	stx	%i2, [%g6 + CC64FSZ + STKB + TF_O + (2*8)]
	stx	%i3, [%g6 + CC64FSZ + STKB + TF_O + (3*8)]
	stx	%i4, [%g6 + CC64FSZ + STKB + TF_O + (4*8)]
	stx	%i5, [%g6 + CC64FSZ + STKB + TF_O + (5*8)]
	stx	%i6, [%g6 + CC64FSZ + STKB + TF_O + (6*8)]
	stx	%i7, [%g6 + CC64FSZ + STKB + TF_O + (7*8)]
#ifdef DEBUG
	ldx	[%sp + CC64FSZ + STKB + TF_I + (0*8)], %l0	! Copy over the rest of the regs
	ldx	[%sp + CC64FSZ + STKB + TF_I + (1*8)], %l1	! But just dirty the locals
	ldx	[%sp + CC64FSZ + STKB + TF_I + (2*8)], %l2
	ldx	[%sp + CC64FSZ + STKB + TF_I + (3*8)], %l3
	ldx	[%sp + CC64FSZ + STKB + TF_I + (4*8)], %l4
	ldx	[%sp + CC64FSZ + STKB + TF_I + (5*8)], %l5
	ldx	[%sp + CC64FSZ + STKB + TF_I + (6*8)], %l6
	ldx	[%sp + CC64FSZ + STKB + TF_I + (7*8)], %l7
	stx	%l0, [%g6 + CC64FSZ + STKB + TF_I + (0*8)]
	stx	%l1, [%g6 + CC64FSZ + STKB + TF_I + (1*8)]
	stx	%l2, [%g6 + CC64FSZ + STKB + TF_I + (2*8)]
	stx	%l3, [%g6 + CC64FSZ + STKB + TF_I + (3*8)]
	stx	%l4, [%g6 + CC64FSZ + STKB + TF_I + (4*8)]
	stx	%l5, [%g6 + CC64FSZ + STKB + TF_I + (5*8)]
	stx	%l6, [%g6 + CC64FSZ + STKB + TF_I + (6*8)]
	stx	%l7, [%g6 + CC64FSZ + STKB + TF_I + (7*8)]
	ldx	[%sp + CC64FSZ + STKB + TF_L + (0*8)], %l0
	ldx	[%sp + CC64FSZ + STKB + TF_L + (1*8)], %l1
	ldx	[%sp + CC64FSZ + STKB + TF_L + (2*8)], %l2
	ldx	[%sp + CC64FSZ + STKB + TF_L + (3*8)], %l3
	ldx	[%sp + CC64FSZ + STKB + TF_L + (4*8)], %l4
	ldx	[%sp + CC64FSZ + STKB + TF_L + (5*8)], %l5
	ldx	[%sp + CC64FSZ + STKB + TF_L + (6*8)], %l6
	ldx	[%sp + CC64FSZ + STKB + TF_L + (7*8)], %l7
	stx	%l0, [%g6 + CC64FSZ + STKB + TF_L + (0*8)]
	stx	%l1, [%g6 + CC64FSZ + STKB + TF_L + (1*8)]
	stx	%l2, [%g6 + CC64FSZ + STKB + TF_L + (2*8)]
	stx	%l3, [%g6 + CC64FSZ + STKB + TF_L + (3*8)]
	stx	%l4, [%g6 + CC64FSZ + STKB + TF_L + (4*8)]
	stx	%l5, [%g6 + CC64FSZ + STKB + TF_L + (5*8)]
	stx	%l6, [%g6 + CC64FSZ + STKB + TF_L + (6*8)]
	stx	%l7, [%g6 + CC64FSZ + STKB + TF_L + (7*8)]
#endif
	ba,pt	%xcc, Lslowtrap_reenter
	 mov	%g6, %sp
#endif

#if 0
/*
 * breakpoint:	capture as much info as possible and then call DDB
 *	or trap, as the case may be.
 *
 *	First, we switch to interrupt globals, and blow away %g7.  Then
 *	switch down one stackframe -- just fiddle w/cwp, don't save or
 *	we'll trap.  Then slowly save all the globals into our static
 *	register buffer.  etc. etc.
 */

breakpoint:
	wrpr	%g0, PSTATE_KERN|PSTATE_IG, %pstate	! Get IG to use
	rdpr	%cwp, %g7
	inc	1, %g7					! Equivalent of save
	wrpr	%g7, 0, %cwp		! Now we have some unused locals to fiddle with
XXX ddb_regs is now ddb-regp and is a pointer not a symbol.
	set	_C_LABEL(ddb_regs), %l0
	stx	%g1, [%l0+DBR_IG+(1*8)]		! Save IGs
	stx	%g2, [%l0+DBR_IG+(2*8)]
	stx	%g3, [%l0+DBR_IG+(3*8)]
	stx	%g4, [%l0+DBR_IG+(4*8)]
	stx	%g5, [%l0+DBR_IG+(5*8)]
	stx	%g6, [%l0+DBR_IG+(6*8)]
	stx	%g7, [%l0+DBR_IG+(7*8)]
	wrpr	%g0, PSTATE_KERN|PSTATE_MG, %pstate	! Get MG to use
	stx	%g1, [%l0+DBR_MG+(1*8)]		! Save MGs
	stx	%g2, [%l0+DBR_MG+(2*8)]
	stx	%g3, [%l0+DBR_MG+(3*8)]
	stx	%g4, [%l0+DBR_MG+(4*8)]
	stx	%g5, [%l0+DBR_MG+(5*8)]
	stx	%g6, [%l0+DBR_MG+(6*8)]
	stx	%g7, [%l0+DBR_MG+(7*8)]
	wrpr	%g0, PSTATE_KERN|PSTATE_AG, %pstate	! Get AG to use
	stx	%g1, [%l0+DBR_AG+(1*8)]		! Save AGs
	stx	%g2, [%l0+DBR_AG+(2*8)]
	stx	%g3, [%l0+DBR_AG+(3*8)]
	stx	%g4, [%l0+DBR_AG+(4*8)]
	stx	%g5, [%l0+DBR_AG+(5*8)]
	stx	%g6, [%l0+DBR_AG+(6*8)]
	stx	%g7, [%l0+DBR_AG+(7*8)]
	wrpr	%g0, PSTATE_KERN, %pstate	! Get G to use
	stx	%g1, [%l0+DBR_G+(1*8)]		! Save Gs
	stx	%g2, [%l0+DBR_G+(2*8)]
	stx	%g3, [%l0+DBR_G+(3*8)]
	stx	%g4, [%l0+DBR_G+(4*8)]
	stx	%g5, [%l0+DBR_G+(5*8)]
	stx	%g6, [%l0+DBR_G+(6*8)]
	stx	%g7, [%l0+DBR_G+(7*8)]
	rdpr	%canrestore, %l1
	stb	%l1, [%l0+DBR_CANRESTORE]
	rdpr	%cansave, %l2
	stb	%l2, [%l0+DBR_CANSAVE]
	rdpr	%cleanwin, %l3
	stb	%l3, [%l0+DBR_CLEANWIN]
	rdpr	%wstate, %l4
	stb	%l4, [%l0+DBR_WSTATE]
	rd	%y, %l5
	stw	%l5, [%l0+DBR_Y]
	rdpr	%tl, %l6
	stb	%l6, [%l0+DBR_TL]
	dec	1, %g7
#endif

/*
 * I will not touch any of the DDB or KGDB stuff until I know what's going
 * on with the symbol table.  This is all still v7/v8 code and needs to be fixed.
 */
#ifdef KGDB
/*
 * bpt is entered on all breakpoint traps.
 * If this is a kernel breakpoint, we do not want to call trap().
 * Among other reasons, this way we can set breakpoints in trap().
 */
bpt:
	set	TSTATE_PRIV, %l4
	andcc	%l4, %l0, %g0		! breakpoint from kernel?
	bz	slowtrap		! no, go do regular trap
	 nop

	/*
	 * Build a trap frame for kgdb_trap_glue to copy.
	 * Enable traps but set ipl high so that we will not
	 * see interrupts from within breakpoints.
	 */
	save	%sp, -CCFSZ-TF_SIZE, %sp	! allocate a trap frame
	TRAP_SETUP(-CCFSZ-TF_SIZE)
	or	%l0, PSR_PIL, %l4	! splhigh()
	wr	%l4, 0, %psr		! the manual claims that this
	wr	%l4, PSR_ET, %psr	! song and dance is necessary
	std	%l0, [%sp + CCFSZ + 0]	! tf.tf_psr, tf.tf_pc
	mov	%l3, %o0		! trap type arg for kgdb_trap_glue
	rd	%y, %l3
	std	%l2, [%sp + CCFSZ + 8]	! tf.tf_npc, tf.tf_y
	rd	%wim, %l3
	st	%l3, [%sp + CCFSZ + 16]	! tf.tf_wim (a kgdb-only r/o field)
	st	%g1, [%sp + CCFSZ + 20]	! tf.tf_global[1]
	std	%g2, [%sp + CCFSZ + 24]	! etc
	std	%g4, [%sp + CCFSZ + 32]
	std	%g6, [%sp + CCFSZ + 40]
	std	%i0, [%sp + CCFSZ + 48]	! tf.tf_in[0..1]
	std	%i2, [%sp + CCFSZ + 56]	! etc
	std	%i4, [%sp + CCFSZ + 64]
	std	%i6, [%sp + CCFSZ + 72]

	/*
	 * Now call kgdb_trap_glue(); if it returns, call trap().
	 */
	mov	%o0, %l3	! gotta save trap type
	call	_C_LABEL(kgdb_trap_glue)	! kgdb_trap_glue(type, &trapframe)
	 add	%sp, CCFSZ, %o1		! (&trapframe)

	/*
	 * Use slowtrap to call trap---but first erase our tracks
	 * (put the registers back the way they were).
	 */
	mov	%l3, %o0	! slowtrap will need trap type
	ld	[%sp + CCFSZ + 12], %l3
	wr	%l3, 0, %y
	ld	[%sp + CCFSZ + 20], %g1
	ldd	[%sp + CCFSZ + 24], %g2
	ldd	[%sp + CCFSZ + 32], %g4
	b	Lslowtrap_reenter
	 ldd	[%sp + CCFSZ + 40], %g6

/*
 * Enter kernel breakpoint.  Write all the windows (not including the
 * current window) into the stack, so that backtrace works.  Copy the
 * supplied trap frame to the kgdb stack and switch stacks.
 *
 * kgdb_trap_glue(type, tf0)
 *	int type;
 *	struct trapframe *tf0;
 */
ENTRY_NOPROFILE(kgdb_trap_glue)
	save	%sp, -CCFSZ, %sp

	flushw				! flush all windows
	mov	%sp, %l4		! %l4 = current %sp

	/* copy trapframe to top of kgdb stack */
	set	_C_LABEL(kgdb_stack) + KGDB_STACK_SIZE - 80, %l0
					! %l0 = tfcopy -> end_of_kgdb_stack
	mov	80, %l1
1:	ldd	[%i1], %l2
	inc	8, %i1
	deccc	8, %l1
	std	%l2, [%l0]
	bg	1b
	 inc	8, %l0

#ifdef NOTDEF_DEBUG
	/* save old red zone and then turn it off */
	sethi	%hi(_C_LABEL(redzone)), %l7
	ld	[%l7 + %lo(_C_LABEL(redzone))], %l6
	st	%g0, [%l7 + %lo(_C_LABEL(redzone))]
#endif
	/* switch to kgdb stack */
	add	%l0, -CCFSZ-TF_SIZE, %sp

	/* if (kgdb_trap(type, tfcopy)) kgdb_rett(tfcopy); */
	mov	%i0, %o0
	call	_C_LABEL(kgdb_trap)
	 add	%l0, -80, %o1
	tst	%o0
	bnz,a	kgdb_rett
	 add	%l0, -80, %g1

	/*
	 * kgdb_trap() did not handle the trap at all so the stack is
	 * still intact.  A simple `restore' will put everything back,
	 * after we reset the stack pointer.
	 */
	mov	%l4, %sp
#ifdef NOTDEF_DEBUG
	st	%l6, [%l7 + %lo(_C_LABEL(redzone))]	! restore red zone
#endif
	ret
	 restore

/*
 * Return from kgdb trap.  This is sort of special.
 *
 * We know that kgdb_trap_glue wrote the window above it, so that we will
 * be able to (and are sure to have to) load it up.  We also know that we
 * came from kernel land and can assume that the %fp (%i6) we load here
 * is proper.  We must also be sure not to lower ipl (it is at splhigh())
 * until we have traps disabled, due to the SPARC taking traps at the
 * new ipl before noticing that PSR_ET has been turned off.  We are on
 * the kgdb stack, so this could be disastrous.
 *
 * Note that the trapframe argument in %g1 points into the current stack
 * frame (current window).  We abandon this window when we move %g1->tf_psr
 * into %psr, but we will not have loaded the new %sp yet, so again traps
 * must be disabled.
 */
kgdb_rett:
	rd	%psr, %g4		! turn off traps
	wr	%g4, PSR_ET, %psr
	/* use the three-instruction delay to do something useful */
	ld	[%g1], %g2		! pick up new %psr
	ld	[%g1 + 12], %g3		! set %y
	wr	%g3, 0, %y
#ifdef NOTDEF_DEBUG
	st	%l6, [%l7 + %lo(_C_LABEL(redzone))]	! and restore red zone
#endif
	wr	%g0, 0, %wim		! enable window changes
	nop; nop; nop
	/* now safe to set the new psr (changes CWP, leaves traps disabled) */
	wr	%g2, 0, %psr		! set rett psr (including cond codes)
	/* 3 instruction delay before we can use the new window */
/*1*/	ldd	[%g1 + 24], %g2		! set new %g2, %g3
/*2*/	ldd	[%g1 + 32], %g4		! set new %g4, %g5
/*3*/	ldd	[%g1 + 40], %g6		! set new %g6, %g7

	/* now we can use the new window */
	mov	%g1, %l4
	ld	[%l4 + 4], %l1		! get new pc
	ld	[%l4 + 8], %l2		! get new npc
	ld	[%l4 + 20], %g1		! set new %g1

	/* set up returnee's out registers, including its %sp */
	ldd	[%l4 + 48], %i0
	ldd	[%l4 + 56], %i2
	ldd	[%l4 + 64], %i4
	ldd	[%l4 + 72], %i6

	/* load returnee's window, making the window above it be invalid */
	restore
	restore	%g0, 1, %l1		! move to inval window and set %l1 = 1
	rd	%psr, %l0
	srl	%l1, %l0, %l1
	wr	%l1, 0, %wim		! %wim = 1 << (%psr & 31)
	sethi	%hi(CPCB), %l1
	LDPTR	[%l1 + %lo(CPCB)], %l1
	and	%l0, 31, %l0		! CWP = %psr & 31;
!	st	%l0, [%l1 + PCB_WIM]	! cpcb->pcb_wim = CWP;
	save	%g0, %g0, %g0		! back to window to reload
!	LOADWIN(%sp)
	save	%g0, %g0, %g0		! back to trap window
	/* note, we have not altered condition codes; safe to just rett */
	RETT
#endif

/*
 * syscall_setup() builds a trap frame and calls syscall().
 * sun_syscall is same but delivers sun system call number
 * XXX	should not have to save&reload ALL the registers just for
 *	ptrace...
 */
syscall_setup:
#ifdef TRAPS_USE_IG
	wrpr	%g0, PSTATE_KERN|PSTATE_IG, %pstate	! DEBUG
#endif
	TRAP_SETUP(-CC64FSZ-TF_SIZE)

#ifdef DEBUG
	rdpr	%tt, %o1	! debug
	sth	%o1, [%sp + CC64FSZ + STKB + TF_TT]! debug
#endif

	wrpr	%g0, PSTATE_KERN, %pstate	! Get back to normal globals
	stx	%g1, [%sp + CC64FSZ + STKB + TF_G + ( 1*8)]
	mov	%g1, %o1			! code
	rdpr	%tpc, %o2			! (pc)
	stx	%g2, [%sp + CC64FSZ + STKB + TF_G + ( 2*8)]
	rdpr	%tstate, %g1
	stx	%g3, [%sp + CC64FSZ + STKB + TF_G + ( 3*8)]
	rdpr	%tnpc, %o3
	stx	%g4, [%sp + CC64FSZ + STKB + TF_G + ( 4*8)]
	rd	%y, %o4
	stx	%g5, [%sp + CC64FSZ + STKB + TF_G + ( 5*8)]
	stx	%g6, [%sp + CC64FSZ + STKB + TF_G + ( 6*8)]
	CHKPT(%g5,%g6,0x31)
	wrpr	%g0, 0, %tl			! return to tl=0
	stx	%g7, [%sp + CC64FSZ + STKB + TF_G + ( 7*8)]
	add	%sp, CC64FSZ + STKB, %o0	! (&tf)

	stx	%g1, [%sp + CC64FSZ + STKB + TF_TSTATE]
	stx	%o2, [%sp + CC64FSZ + STKB + TF_PC]
	stx	%o3, [%sp + CC64FSZ + STKB + TF_NPC]
	st	%o4, [%sp + CC64FSZ + STKB + TF_Y]

	rdpr	%pil, %g5
	stb	%g5, [%sp + CC64FSZ + STKB + TF_PIL]
	stb	%g5, [%sp + CC64FSZ + STKB + TF_OLDPIL]

	!! In the EMBEDANY memory model %g4 points to the start of the data segment.
	!! In our case we need to clear it before calling any C-code
	clr	%g4
	wr	%g0, ASI_PRIMARY_NOFAULT, %asi	! Restore default ASI

	sethi	%hi(CURLWP), %l1
	LDPTR	[%l1 + %lo(CURLWP)], %l1
	LDPTR	[%l1 + L_PROC], %l1		! now %l1 points to p
	LDPTR	[%l1 + P_MD_SYSCALL], %l1
	call	%l1
	 wrpr	%g0, PSTATE_INTR, %pstate	! turn on interrupts

	/* see `lwp_trampoline' for the reason for this label */
return_from_syscall:
	wrpr	%g0, PSTATE_KERN, %pstate	! Disable interrupts
	CHKPT(%o1,%o2,0x32)
	wrpr	%g0, 0, %tl			! Return to tl==0
	CHKPT(%o1,%o2,4)
	ba,a,pt	%icc, return_from_trap
	 nop
	NOTREACHED

/*
 * interrupt_vector:
 *
 * Spitfire chips never get level interrupts directly from H/W.
 * Instead, all interrupts come in as interrupt_vector traps.
 * The interrupt number or handler address is an 11 bit number
 * encoded in the first interrupt data word.  Additional words
 * are application specific and used primarily for cross-calls.
 *
 * The interrupt vector handler then needs to identify the
 * interrupt source from the interrupt number and arrange to
 * invoke the interrupt handler.  This can either be done directly
 * from here, or a softint at a particular level can be issued.
 *
 * To call an interrupt directly and not overflow the trap stack,
 * the trap registers should be saved on the stack, registers
 * cleaned, trap-level decremented, the handler called, and then
 * the process must be reversed.
 *
 * To simplify life all we do here is issue an appropriate softint.
 *
 * Note:	It is impossible to identify or change a device's
 *		interrupt number until it is probed.  That's the
 *		purpose for all the funny interrupt acknowledge
 *		code.
 *
 */

/*
 * Vectored interrupts:
 *
 * When an interrupt comes in, interrupt_vector uses the interrupt
 * vector number to lookup the appropriate intrhand from the intrlev
 * array.  It then looks up the interrupt level from the intrhand
 * structure.  It uses the level to index the intrpending array,
 * which is 8 slots for each possible interrupt level (so we can
 * shift instead of multiply for address calculation).  It hunts for
 * any available slot at that level.  Available slots are NULL.
 *
 * Then interrupt_vector uses the interrupt level in the intrhand
 * to issue a softint of the appropriate level.  The softint handler
 * figures out what level interrupt it's handling and pulls the first
 * intrhand pointer out of the intrpending array for that interrupt
 * level, puts a NULL in its place, clears the interrupt generator,
 * and invokes the interrupt handler.
 */
3583 3583
3584/* intrpending array is now in per-CPU structure. */ 3584/* intrpending array is now in per-CPU structure. */
3585 3585
3586#ifdef DEBUG 3586#ifdef DEBUG
3587#define INTRDEBUG_VECTOR 0x1 3587#define INTRDEBUG_VECTOR 0x1
3588#define INTRDEBUG_LEVEL 0x2 3588#define INTRDEBUG_LEVEL 0x2
3589#define INTRDEBUG_FUNC 0x4 3589#define INTRDEBUG_FUNC 0x4
3590#define INTRDEBUG_SPUR 0x8 3590#define INTRDEBUG_SPUR 0x8
3591 .data 3591 .data
3592 .globl _C_LABEL(intrdebug) 3592 .globl _C_LABEL(intrdebug)
3593_C_LABEL(intrdebug): .word 0x0 3593_C_LABEL(intrdebug): .word 0x0
3594/* 3594/*
3595 * Note: we use the local label `97' to branch forward to, to skip 3595 * Note: we use the local label `97' to branch forward to, to skip
3596 * actual debugging code following a `intrdebug' bit test. 3596 * actual debugging code following a `intrdebug' bit test.
3597 */ 3597 */
3598#endif 3598#endif
3599 .text 3599 .text
3600interrupt_vector: 3600interrupt_vector:
3601#ifdef TRAPSTATS 3601#ifdef TRAPSTATS
3602 set _C_LABEL(kiveccnt), %g1 3602 set _C_LABEL(kiveccnt), %g1
3603 set _C_LABEL(iveccnt), %g2 3603 set _C_LABEL(iveccnt), %g2
3604 rdpr %tl, %g3 3604 rdpr %tl, %g3
3605 dec %g3 3605 dec %g3
3606 movrz %g3, %g2, %g1 3606 movrz %g3, %g2, %g1
3607 lduw [%g1], %g2 3607 lduw [%g1], %g2
3608 inc %g2 3608 inc %g2
3609 stw %g2, [%g1] 3609 stw %g2, [%g1]
3610#endif 3610#endif
3611 ldxa [%g0] ASI_IRSR, %g1 3611 ldxa [%g0] ASI_IRSR, %g1
3612 mov IRDR_0H, %g7 3612 mov IRDR_0H, %g7
3613 ldxa [%g7] ASI_IRDR, %g7 ! Get interrupt number 3613 ldxa [%g7] ASI_IRDR, %g7 ! Get interrupt number
3614 membar #Sync 3614 membar #Sync
3615 3615
3616#if KTR_COMPILE & KTR_INTR 3616#if KTR_COMPILE & KTR_INTR
3617 CATR(KTR_TRAP, "interrupt_vector: tl %d ASI_IRSR %p ASI_IRDR %p", 3617 CATR(KTR_TRAP, "interrupt_vector: tl %d ASI_IRSR %p ASI_IRDR %p",
3618 %g3, %g5, %g6, 10, 11, 12) 3618 %g3, %g5, %g6, 10, 11, 12)
3619 rdpr %tl, %g5 3619 rdpr %tl, %g5
3620 stx %g5, [%g3 + KTR_PARM1] 3620 stx %g5, [%g3 + KTR_PARM1]
3621 stx %g1, [%g3 + KTR_PARM2] 3621 stx %g1, [%g3 + KTR_PARM2]
3622 stx %g7, [%g3 + KTR_PARM3] 3622 stx %g7, [%g3 + KTR_PARM3]
362312: 362312:
3624#endif 3624#endif
3625 3625
3626 btst IRSR_BUSY, %g1 3626 btst IRSR_BUSY, %g1
3627 bz,pn %icc, 3f ! spurious interrupt 3627 bz,pn %icc, 3f ! spurious interrupt
3628#ifdef MULTIPROCESSOR 3628#ifdef MULTIPROCESSOR
3629 sethi %hi(KERNBASE), %g1 3629 sethi %hi(KERNBASE), %g1
3630 3630
3631 cmp %g7, %g1 3631 cmp %g7, %g1
3632 bl,pt %xcc, Lsoftint_regular ! >= KERNBASE is a fast cross-call 3632 bl,pt %xcc, Lsoftint_regular ! >= KERNBASE is a fast cross-call
3633 cmp %g7, MAXINTNUM 3633 cmp %g7, MAXINTNUM
3634 3634
3635 mov IRDR_1H, %g2 3635 mov IRDR_1H, %g2
3636 ldxa [%g2] ASI_IRDR, %g2 ! Get IPI handler argument 1 3636 ldxa [%g2] ASI_IRDR, %g2 ! Get IPI handler argument 1
3637 mov IRDR_2H, %g3 3637 mov IRDR_2H, %g3
3638 ldxa [%g3] ASI_IRDR, %g3 ! Get IPI handler argument 2 3638 ldxa [%g3] ASI_IRDR, %g3 ! Get IPI handler argument 2
3639 3639
3640 stxa %g0, [%g0] ASI_IRSR ! Ack IRQ 3640 stxa %g0, [%g0] ASI_IRSR ! Ack IRQ
3641 membar #Sync ! Should not be needed due to retry 3641 membar #Sync ! Should not be needed due to retry
3642 3642
3643 jmpl %g7, %g0 3643 jmpl %g7, %g0
3644 nop 3644 nop
3645#else 3645#else
3646 cmp %g7, MAXINTNUM 3646 cmp %g7, MAXINTNUM
3647#endif 3647#endif
3648 3648
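On MULTIPROCESSOR kernels the vector number read from IRDR doubles as a dispatch token: values at or above KERNBASE are taken as the address of a fast cross-call (IPI) handler and jumped to directly with two arguments, while smaller values index the `intrlev` table. A hedged C sketch of that decision; the constants and table here are illustrative stand-ins, not the kernel's real values:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins, not the real kernel constants/tables. */
#define KERNBASE   0x1000UL        /* real kernel VA base is much higher */
#define MAXINTNUM  256

typedef void (*ipi_fn)(uint64_t arg1, uint64_t arg2);

static int ipi_ran;
static void fake_ipi(uint64_t a1, uint64_t a2) { ipi_ran = (int)(a1 + a2); }

static void *intrlev[MAXINTNUM];   /* per-vector handler slots */

/*
 * Mirror of the branch structure above: high "numbers" are called as
 * IPI handlers, in-range numbers index intrlev, the rest are spurious.
 * Returns the handler slot (possibly NULL) for a regular vector.
 */
static void *dispatch(uint64_t number, uint64_t a1, uint64_t a2)
{
    if (number >= KERNBASE) {              /* fast cross-call: jump to it */
        ((ipi_fn)(uintptr_t)number)(a1, a2);
        return 0;
    }
    if (number >= MAXINTNUM)               /* out of range: spurious */
        return 0;
    return intrlev[number];                /* NULL if not registered yet */
}
```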
3649Lsoftint_regular: 3649Lsoftint_regular:
3650 stxa %g0, [%g0] ASI_IRSR ! Ack IRQ 3650 stxa %g0, [%g0] ASI_IRSR ! Ack IRQ
3651 membar #Sync ! Should not be needed due to retry 3651 membar #Sync ! Should not be needed due to retry
3652 sllx %g7, PTRSHFT, %g5 ! Calculate entry number 3652 sllx %g7, PTRSHFT, %g5 ! Calculate entry number
3653 sethi %hi(_C_LABEL(intrlev)), %g3 3653 sethi %hi(_C_LABEL(intrlev)), %g3
3654 bgeu,pn %xcc, 3f 3654 bgeu,pn %xcc, 3f
3655 or %g3, %lo(_C_LABEL(intrlev)), %g3 3655 or %g3, %lo(_C_LABEL(intrlev)), %g3
3656 LDPTR [%g3 + %g5], %g5 ! We have a pointer to the handler 3656 LDPTR [%g3 + %g5], %g5 ! We have a pointer to the handler
3657 brz,pn %g5, 3f ! NULL means it isn't registered yet. Skip it. 3657 brz,pn %g5, 3f ! NULL means it isn't registered yet. Skip it.
3658 nop 3658 nop
3659 3659
3660setup_sparcintr: 3660setup_sparcintr:
3661 LDPTR [%g5+IH_PEND], %g6 ! Read pending flag 3661 LDPTR [%g5+IH_PEND], %g6 ! Read pending flag
3662 brnz,pn %g6, ret_from_intr_vector ! Skip it if it's running 3662 brnz,pn %g6, ret_from_intr_vector ! Skip it if it's running
3663 ldub [%g5+IH_PIL], %g6 ! Read interrupt mask 3663 ldub [%g5+IH_PIL], %g6 ! Read interrupt mask
3664 sethi %hi(CPUINFO_VA+CI_INTRPENDING), %g1 3664 sethi %hi(CPUINFO_VA+CI_INTRPENDING), %g1
3665 sll %g6, PTRSHFT, %g3 ! Find start of table for this IPL 3665 sll %g6, PTRSHFT, %g3 ! Find start of table for this IPL
3666 or %g1, %lo(CPUINFO_VA+CI_INTRPENDING), %g1 3666 or %g1, %lo(CPUINFO_VA+CI_INTRPENDING), %g1
3667 add %g1, %g3, %g1 3667 add %g1, %g3, %g1
36681: 36681:
3669 LDPTR [%g1], %g3 ! Load list head 3669 LDPTR [%g1], %g3 ! Load list head
3670 STPTR %g3, [%g5+IH_PEND] ! Link our intrhand node in 3670 STPTR %g3, [%g5+IH_PEND] ! Link our intrhand node in
3671 mov %g5, %g7 3671 mov %g5, %g7
3672 CASPTR [%g1] ASI_N, %g3, %g7 3672 CASPTR [%g1] ASI_N, %g3, %g7
3673 cmp %g7, %g3 ! Did it work? 3673 cmp %g7, %g3 ! Did it work?
3674 bne,pn CCCR, 1b ! No, try again 3674 bne,pn CCCR, 1b ! No, try again
3675 EMPTY 3675 EMPTY
36762: 36762:
3677#ifdef NOT_DEBUG 3677#ifdef NOT_DEBUG
3678 set _C_LABEL(intrdebug), %g7 3678 set _C_LABEL(intrdebug), %g7
3679 ld [%g7], %g7 3679 ld [%g7], %g7
3680 btst INTRDEBUG_VECTOR, %g7 3680 btst INTRDEBUG_VECTOR, %g7
3681 bz,pt %icc, 97f 3681 bz,pt %icc, 97f
3682 nop 3682 nop
3683 3683
3684 cmp %g6, 0xa ! ignore clock interrupts? 3684 cmp %g6, 0xa ! ignore clock interrupts?
3685 bz,pt %icc, 97f 3685 bz,pt %icc, 97f
3686 nop 3686 nop
3687 3687
3688 STACKFRAME(-CC64FSZ) ! Get a clean register window 3688 STACKFRAME(-CC64FSZ) ! Get a clean register window
3689 LOAD_ASCIZ(%o0,\ 3689 LOAD_ASCIZ(%o0,\
3690 "interrupt_vector: number %lx softint mask %lx pil %lu slot %p\r\n") 3690 "interrupt_vector: number %lx softint mask %lx pil %lu slot %p\r\n")
3691 mov %g2, %o1 3691 mov %g2, %o1
3692 rdpr %pil, %o3 3692 rdpr %pil, %o3
3693 mov %g1, %o4 3693 mov %g1, %o4
3694 GLOBTOLOC 3694 GLOBTOLOC
3695 clr %g4 3695 clr %g4
3696 call prom_printf 3696 call prom_printf
3697 mov %g6, %o2 3697 mov %g6, %o2
3698 LOCTOGLOB 3698 LOCTOGLOB
3699 restore 3699 restore
370097: 370097:
3701#endif 3701#endif
3702 mov 1, %g7 3702 mov 1, %g7
3703 sll %g7, %g6, %g6 3703 sll %g7, %g6, %g6
3704 wr %g6, 0, SET_SOFTINT ! Invoke a softint 3704 wr %g6, 0, SET_SOFTINT ! Invoke a softint
3705 3705
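The LDPTR/CASPTR loop at `setup_sparcintr` is a lock-free push onto the per-IPL pending list: load the head, link the new node in, and compare-and-swap; if another CPU (or a nested vector) changed the head in between, retry. A minimal C11 sketch of the same pattern, with a hypothetical miniature of `struct intrhand`:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

/* Hypothetical miniature of struct intrhand's pending link. */
struct intrhand {
    struct intrhand *ih_pend;   /* next node in the pending list */
};

/* One list head per IPL, as in the ci_intrpending[] table. */
static _Atomic(struct intrhand *) intrpending_head;

/*
 * Push a handler onto the pending list, retrying if the head moved
 * between the load and the CAS -- the LDPTR/STPTR/CASPTR loop above.
 */
static void pend_intr(struct intrhand *ih)
{
    struct intrhand *head;
    do {
        head = atomic_load(&intrpending_head);
        ih->ih_pend = head;            /* link our node in */
    } while (!atomic_compare_exchange_weak(&intrpending_head, &head, ih));
}
```

Note that the pending link itself doubles as the "already queued" flag, which is why the code above first tests IH_PEND and skips re-queuing.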
3706ret_from_intr_vector: 3706ret_from_intr_vector:
3707#if KTR_COMPILE & KTR_INTR 3707#if KTR_COMPILE & KTR_INTR
3708 CATR(KTR_TRAP, "ret_from_intr_vector: tl %d, tstate %p, tpc %p", 3708 CATR(KTR_TRAP, "ret_from_intr_vector: tl %d, tstate %p, tpc %p",
3709 %g3, %g4, %g5, 10, 11, 12) 3709 %g3, %g4, %g5, 10, 11, 12)
3710 rdpr %tl, %g5 3710 rdpr %tl, %g5
3711 stx %g5, [%g3 + KTR_PARM1] 3711 stx %g5, [%g3 + KTR_PARM1]
3712 rdpr %tstate, %g5 3712 rdpr %tstate, %g5
3713 stx %g5, [%g3 + KTR_PARM2] 3713 stx %g5, [%g3 + KTR_PARM2]
3714 rdpr %tpc, %g5 3714 rdpr %tpc, %g5
3715 stx %g5, [%g3 + KTR_PARM3] 3715 stx %g5, [%g3 + KTR_PARM3]
371612: 371612:
3717#endif 3717#endif
3718 retry 3718 retry
3719 NOTREACHED 3719 NOTREACHED
3720 3720
37213: 37213:
3722#ifdef NOT_DEBUG /* always do this */ 3722#ifdef NOT_DEBUG /* always do this */
3723 set _C_LABEL(intrdebug), %g6 3723 set _C_LABEL(intrdebug), %g6
3724 ld [%g6], %g6 3724 ld [%g6], %g6
3725 btst INTRDEBUG_SPUR, %g6 3725 btst INTRDEBUG_SPUR, %g6
3726 bz,pt %icc, 97f 3726 bz,pt %icc, 97f
3727 nop 3727 nop
3728#endif 3728#endif
3729#if 1 3729#if 1
3730 STACKFRAME(-CC64FSZ) ! Get a clean register window 3730 STACKFRAME(-CC64FSZ) ! Get a clean register window
3731 LOAD_ASCIZ(%o0, "interrupt_vector: spurious vector %lx at pil %d\r\n") 3731 LOAD_ASCIZ(%o0, "interrupt_vector: spurious vector %lx at pil %d\r\n")
3732 mov %g7, %o1 3732 mov %g7, %o1
3733 GLOBTOLOC 3733 GLOBTOLOC
3734 clr %g4 3734 clr %g4
3735 call prom_printf 3735 call prom_printf
3736 rdpr %pil, %o2 3736 rdpr %pil, %o2
3737 LOCTOGLOB 3737 LOCTOGLOB
3738 restore 3738 restore
373997: 373997:
3740#endif 3740#endif
3741 ba,a ret_from_intr_vector 3741 ba,a ret_from_intr_vector
3742 nop ! XXX spitfire bug? 3742 nop ! XXX spitfire bug?
3743 3743
3744#if defined(MULTIPROCESSOR) 3744#if defined(MULTIPROCESSOR)
3745/* 3745/*
 3746 * IPI handler that does nothing, but causes rescheduling. 3746 * IPI handler that does nothing, but causes rescheduling.
3747 * void sparc64_ipi_nop(void *); 3747 * void sparc64_ipi_nop(void *);
3748 */ 3748 */
3749ENTRY(sparc64_ipi_nop) 3749ENTRY(sparc64_ipi_nop)
3750 ba,a ret_from_intr_vector 3750 ba,a ret_from_intr_vector
3751 nop 3751 nop
3752 3752
3753/* 3753/*
3754 * IPI handler to halt the CPU. Just calls the C vector. 3754 * IPI handler to halt the CPU. Just calls the C vector.
3755 * void sparc64_ipi_halt(void *); 3755 * void sparc64_ipi_halt(void *);
3756 */ 3756 */
3757ENTRY(sparc64_ipi_halt) 3757ENTRY(sparc64_ipi_halt)
3758 call _C_LABEL(sparc64_ipi_halt_thiscpu) 3758 call _C_LABEL(sparc64_ipi_halt_thiscpu)
3759 clr %g4 3759 clr %g4
3760 sir 3760 sir
3761 3761
3762/* 3762/*
3763 * IPI handler to pause the CPU. We just trap to the debugger if it 3763 * IPI handler to pause the CPU. We just trap to the debugger if it
3764 * is configured, otherwise just return. 3764 * is configured, otherwise just return.
3765 */ 3765 */
3766ENTRY(sparc64_ipi_pause) 3766ENTRY(sparc64_ipi_pause)
3767#if defined(DDB) 3767#if defined(DDB)
3768sparc64_ipi_pause_trap_point: 3768sparc64_ipi_pause_trap_point:
3769 ta 1 3769 ta 1
3770 nop 3770 nop
3771#endif 3771#endif
3772 ba,a ret_from_intr_vector 3772 ba,a ret_from_intr_vector
3773 nop 3773 nop
3774 3774
3775/* 3775/*
3776 * Increment IPI event counter, defined in machine/{cpu,intr}.h. 3776 * Increment IPI event counter, defined in machine/{cpu,intr}.h.
3777 */ 3777 */
3778#define IPIEVC_INC(n,r1,r2) \ 3778#define IPIEVC_INC(n,r1,r2) \
3779 sethi %hi(CPUINFO_VA+CI_IPIEVC+EVC_SIZE*n), r2; \ 3779 sethi %hi(CPUINFO_VA+CI_IPIEVC+EVC_SIZE*n), r2; \
3780 ldx [r2 + %lo(CPUINFO_VA+CI_IPIEVC+EVC_SIZE*n)], r1; \ 3780 ldx [r2 + %lo(CPUINFO_VA+CI_IPIEVC+EVC_SIZE*n)], r1; \
3781 inc r1; \ 3781 inc r1; \
3782 stx r1, [r2 + %lo(CPUINFO_VA+CI_IPIEVC+EVC_SIZE*n)] 3782 stx r1, [r2 + %lo(CPUINFO_VA+CI_IPIEVC+EVC_SIZE*n)]
3783 3783
3784/* 3784/*
3785 * IPI handler to flush single pte. 3785 * IPI handler to flush single pte.
3786 * void sparc64_ipi_flush_pte(void *); 3786 * void sparc64_ipi_flush_pte(void *);
3787 * 3787 *
3788 * On entry: 3788 * On entry:
3789 * %g2 = vaddr_t va 3789 * %g2 = vaddr_t va
3790 * %g3 = int ctx 3790 * %g3 = int ctx
3791 */ 3791 */
3792ENTRY(sparc64_ipi_flush_pte) 3792ENTRY(sparc64_ipi_flush_pte)
3793#if KTR_COMPILE & KTR_PMAP 3793#if KTR_COMPILE & KTR_PMAP
3794 CATR(KTR_TRAP, "sparc64_ipi_flush_pte:", 3794 CATR(KTR_TRAP, "sparc64_ipi_flush_pte:",
3795 %g1, %g3, %g4, 10, 11, 12) 3795 %g1, %g3, %g4, 10, 11, 12)
379612: 379612:
3797#endif 3797#endif
3798#ifdef SPITFIRE 3798#ifdef SPITFIRE
3799 srlx %g2, PG_SHIFT4U, %g2 ! drop unused va bits 3799 srlx %g2, PG_SHIFT4U, %g2 ! drop unused va bits
3800 mov CTX_SECONDARY, %g5 3800 mov CTX_SECONDARY, %g5
3801 sllx %g2, PG_SHIFT4U, %g2 3801 sllx %g2, PG_SHIFT4U, %g2
3802 ldxa [%g5] ASI_DMMU, %g6 ! Save secondary context 3802 ldxa [%g5] ASI_DMMU, %g6 ! Save secondary context
3803 sethi %hi(KERNBASE), %g7 3803 sethi %hi(KERNBASE), %g7
3804 membar #LoadStore 3804 membar #LoadStore
3805 stxa %g3, [%g5] ASI_DMMU ! Insert context to demap 3805 stxa %g3, [%g5] ASI_DMMU ! Insert context to demap
3806 membar #Sync 3806 membar #Sync
3807 or %g2, DEMAP_PAGE_SECONDARY, %g2 ! Demap page from secondary context only 3807 or %g2, DEMAP_PAGE_SECONDARY, %g2 ! Demap page from secondary context only
3808 stxa %g2, [%g2] ASI_DMMU_DEMAP ! Do the demap 3808 stxa %g2, [%g2] ASI_DMMU_DEMAP ! Do the demap
3809 stxa %g2, [%g2] ASI_IMMU_DEMAP ! to both TLBs 3809 stxa %g2, [%g2] ASI_IMMU_DEMAP ! to both TLBs
3810#ifdef _LP64 3810#ifdef _LP64
3811 srl %g2, 0, %g2 ! and make sure it's both 32- and 64-bit entries 3811 srl %g2, 0, %g2 ! and make sure it's both 32- and 64-bit entries
3812 stxa %g2, [%g2] ASI_DMMU_DEMAP ! Do the demap 3812 stxa %g2, [%g2] ASI_DMMU_DEMAP ! Do the demap
3813 stxa %g2, [%g2] ASI_IMMU_DEMAP ! Do the demap 3813 stxa %g2, [%g2] ASI_IMMU_DEMAP ! Do the demap
3814#endif 3814#endif
3815 flush %g7 3815 flush %g7
3816 stxa %g6, [%g5] ASI_DMMU ! Restore secondary context 3816 stxa %g6, [%g5] ASI_DMMU ! Restore secondary context
3817 membar #Sync 3817 membar #Sync
3818 IPIEVC_INC(IPI_EVCNT_TLB_PTE,%g2,%g3) 3818 IPIEVC_INC(IPI_EVCNT_TLB_PTE,%g2,%g3)
3819#else 3819#else
3820 WRITEME 3820
 3821 andn %g2, 0xfff, %g2 ! drop unused va bits
 3822 mov CTX_PRIMARY, %g5
 3823 ldxa [%g5] ASI_DMMU, %g6 ! Save primary context
 3824 sethi %hi(KERNBASE), %g7
 3825 membar #LoadStore
 3826 stxa %g3, [%g5] ASI_DMMU ! Insert context to demap
 3827 membar #Sync
 3828 or %g2, DEMAP_PAGE_PRIMARY, %g2 ! Demap page from primary context only
 3829 stxa %g2, [%g2] ASI_DMMU_DEMAP ! Do the demap
 3830 stxa %g2, [%g2] ASI_IMMU_DEMAP ! to both TLBs
 3831#ifdef _LP64
 3832 srl %g2, 0, %g2 ! and make sure it's both 32- and 64-bit entries
 3833 stxa %g2, [%g2] ASI_DMMU_DEMAP ! Do the demap
 3834 stxa %g2, [%g2] ASI_IMMU_DEMAP ! Do the demap
 3835#endif
 3836 flush %g7
 3837 stxa %g6, [%g5] ASI_DMMU ! Restore primary context
 3838 membar #Sync
 3839 IPIEVC_INC(IPI_EVCNT_TLB_PTE,%g2,%g3)
3821#endif 3840#endif
3822  3841
3823 ba,a ret_from_intr_vector 3842 ba,a ret_from_intr_vector
3824 nop 3843 nop
3825 3844
3826 3845
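Both variants of `sparc64_ipi_flush_pte` build a "demap page" address: the VA with its low offset bits cleared (the `andn %g2, 0xfff` / `srlx`+`sllx` above), OR'd with bits selecting the demap type and which context register to match. A sketch under assumed bit encodings; the real values for `DEMAP_PAGE_PRIMARY` and friends live in the MMU headers, these are illustrative:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed encodings for illustration; see the real ctlreg.h values. */
#define DEMAP_VA_MASK          0xfffUL  /* low VA bits ignored by demap,
                                           matching the andn 0xfff above */
#define DEMAP_PAGE_PRIMARY     0x10UL   /* type=page, context=primary */

/* Build the address written to ASI_DMMU_DEMAP / ASI_IMMU_DEMAP. */
static uint64_t demap_addr(uint64_t va)
{
    return (va & ~DEMAP_VA_MASK) | DEMAP_PAGE_PRIMARY;  /* andn + or */
}
```

The value is then stored to both the D- and I-MMU demap ASIs, since the page may be mapped in either TLB.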
3827/* 3846/*
3828 * Secondary CPU bootstrap code. 3847 * Secondary CPU bootstrap code.
3829 */ 3848 */
3830 .text 3849 .text
3831 .align 32 3850 .align 32
38321: rd %pc, %l0 38511: rd %pc, %l0
3833 LDULNG [%l0 + (4f-1b)], %l1 3852 LDULNG [%l0 + (4f-1b)], %l1
3834 add %l0, (6f-1b), %l2 3853 add %l0, (6f-1b), %l2
3835 clr %l3 3854 clr %l3
38362: cmp %l3, %l1 38552: cmp %l3, %l1
3837 be CCCR, 3f 3856 be CCCR, 3f
3838 nop 3857 nop
3839 ldx [%l2 + TTE_VPN], %l4 3858 ldx [%l2 + TTE_VPN], %l4
3840 ldx [%l2 + TTE_DATA], %l5 3859 ldx [%l2 + TTE_DATA], %l5
3841 wr %g0, ASI_DMMU, %asi 3860 wr %g0, ASI_DMMU, %asi
3842 stxa %l4, [%g0 + TLB_TAG_ACCESS] %asi 3861 stxa %l4, [%g0 + TLB_TAG_ACCESS] %asi
3843 stxa %l5, [%g0] ASI_DMMU_DATA_IN 3862 stxa %l5, [%g0] ASI_DMMU_DATA_IN
3844 wr %g0, ASI_IMMU, %asi 3863 wr %g0, ASI_IMMU, %asi
3845 stxa %l4, [%g0 + TLB_TAG_ACCESS] %asi 3864 stxa %l4, [%g0 + TLB_TAG_ACCESS] %asi
3846 stxa %l5, [%g0] ASI_IMMU_DATA_IN 3865 stxa %l5, [%g0] ASI_IMMU_DATA_IN
3847 membar #Sync 3866 membar #Sync
3848 flush %l4 3867 flush %l4
3849 add %l2, PTE_SIZE, %l2 3868 add %l2, PTE_SIZE, %l2
3850 add %l3, 1, %l3 3869 add %l3, 1, %l3
3851 ba %xcc, 2b 3870 ba %xcc, 2b
3852 nop 3871 nop
38533: LDULNG [%l0 + (5f-1b)], %l1 38723: LDULNG [%l0 + (5f-1b)], %l1
3854 LDULNG [%l0 + (7f-1b)], %g2 ! Load cpu_info address. 3873 LDULNG [%l0 + (7f-1b)], %g2 ! Load cpu_info address.
3855 jmpl %l1, %g0 3874 jmpl %l1, %g0
3856 nop 3875 nop
3857 3876
3858 .align PTRSZ 3877 .align PTRSZ
38594: ULONG 0x0 38784: ULONG 0x0
38605: ULONG 0x0 38795: ULONG 0x0
38617: ULONG 0x0 38807: ULONG 0x0
3862 _ALIGN 3881 _ALIGN
38636: 38826:
3864 3883
3865#define DATA(name) \ 3884#define DATA(name) \
3866 .data ; \ 3885 .data ; \
3867 .align PTRSZ ; \ 3886 .align PTRSZ ; \
3868 .globl name ; \ 3887 .globl name ; \
3869name: 3888name:
3870 3889
3871DATA(mp_tramp_code) 3890DATA(mp_tramp_code)
3872 POINTER 1b 3891 POINTER 1b
3873DATA(mp_tramp_code_len) 3892DATA(mp_tramp_code_len)
3874 ULONG 6b-1b 3893 ULONG 6b-1b
3875DATA(mp_tramp_tlb_slots) 3894DATA(mp_tramp_tlb_slots)
3876 ULONG 4b-1b 3895 ULONG 4b-1b
3877DATA(mp_tramp_func) 3896DATA(mp_tramp_func)
3878 ULONG 5b-1b 3897 ULONG 5b-1b
3879DATA(mp_tramp_ci) 3898DATA(mp_tramp_ci)
3880 ULONG 7b-1b 3899 ULONG 7b-1b
3881 3900
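The `mp_tramp_*` DATA words record offsets into the trampoline (`4b-1b`, `5b-1b`, `7b-1b`) so the CPU spin-up code can copy the code block and patch the TLB-slot count, entry function, and cpu_info pointer into the right places. A hypothetical sketch of that patch-by-offset scheme; the names and layout here are invented, not the kernel's:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Invented miniature: a 32-byte "trampoline" with one patchable slot. */
static const uint8_t tramp_code[32];      /* stands in for mp_tramp_code */
static const size_t  tramp_func_off = 8;  /* stands in for mp_tramp_func */

/* Copy the trampoline and write the entry point at the recorded offset. */
static void setup_tramp(uint8_t *dst, uint64_t func)
{
    memcpy(dst, tramp_code, sizeof(tramp_code));
    memcpy(dst + tramp_func_off, &func, sizeof(func));  /* patch slot 5: */
}
```

Recording offsets rather than absolute addresses keeps the trampoline position-independent, which matters because it runs from a freshly mapped page on the secondary CPU.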
3882 .text 3901 .text
3883 .align 32 3902 .align 32
3884#endif /* MULTIPROCESSOR */ 3903#endif /* MULTIPROCESSOR */
3885 3904
3886/* 3905/*
 3887 * Ultra1 and Ultra2 CPUs use soft interrupts for everything. On a soft 3906 * Ultra1 and Ultra2 CPUs use soft interrupts for everything. On a soft
 3888 * interrupt, we check which bits in ASR_SOFTINT(0x16) are set, 3907 * interrupt, we check which bits in ASR_SOFTINT(0x16) are set,
 3889 * handle those interrupts, then clear them by setting the 3908 * handle those interrupts, then clear them by setting the
 3890 * appropriate bits in ASR_CLEAR_SOFTINT(0x15). 3909 * appropriate bits in ASR_CLEAR_SOFTINT(0x15).
3891 * 3910 *
3892 * We have an array of 8 interrupt vector slots for each of 15 interrupt 3911 * We have an array of 8 interrupt vector slots for each of 15 interrupt
3893 * levels. If a vectored interrupt can be dispatched, the dispatch 3912 * levels. If a vectored interrupt can be dispatched, the dispatch
3894 * routine will place a pointer to an intrhand structure in one of 3913 * routine will place a pointer to an intrhand structure in one of
3895 * the slots. The interrupt handler will go through the list to look 3914 * the slots. The interrupt handler will go through the list to look
3896 * for an interrupt to dispatch. If it finds one it will pull it off 3915 * for an interrupt to dispatch. If it finds one it will pull it off
3897 * the list, free the entry, and call the handler. The code is like 3916 * the list, free the entry, and call the handler. The code is like
3898 * this: 3917 * this:
3899 * 3918 *
3900 * for (i=0; i<8; i++) 3919 * for (i=0; i<8; i++)
3901 * if (ih = intrpending[intlev][i]) { 3920 * if (ih = intrpending[intlev][i]) {
3902 * intrpending[intlev][i] = NULL; 3921 * intrpending[intlev][i] = NULL;
3903 * if ((*ih->ih_fun)(ih->ih_arg ? ih->ih_arg : &frame)) 3922 * if ((*ih->ih_fun)(ih->ih_arg ? ih->ih_arg : &frame))
3904 * return; 3923 * return;
3905 * strayintr(&frame); 3924 * strayintr(&frame);
3906 * return; 3925 * return;
3907 * } 3926 * }
3908 * 3927 *
3909 * Otherwise we go back to the old style of polled interrupts. 3928 * Otherwise we go back to the old style of polled interrupts.
3910 * 3929 *
3911 * After preliminary setup work, the interrupt is passed to each 3930 * After preliminary setup work, the interrupt is passed to each
3912 * registered handler in turn. These are expected to return nonzero if 3931 * registered handler in turn. These are expected to return nonzero if
3913 * they took care of the interrupt. If a handler claims the interrupt, 3932 * they took care of the interrupt. If a handler claims the interrupt,
3914 * we exit (hardware interrupts are latched in the requestor so we'll 3933 * we exit (hardware interrupts are latched in the requestor so we'll
3915 * just take another interrupt in the unlikely event of simultaneous 3934 * just take another interrupt in the unlikely event of simultaneous
3916 * interrupts from two different devices at the same level). If we go 3935 * interrupts from two different devices at the same level). If we go
3917 * through all the registered handlers and no one claims it, we report a 3936 * through all the registered handlers and no one claims it, we report a
3918 * stray interrupt. This is more or less done as: 3937 * stray interrupt. This is more or less done as:
3919 * 3938 *
3920 * for (ih = intrhand[intlev]; ih; ih = ih->ih_next) 3939 * for (ih = intrhand[intlev]; ih; ih = ih->ih_next)
3921 * if ((*ih->ih_fun)(ih->ih_arg ? ih->ih_arg : &frame)) 3940 * if ((*ih->ih_fun)(ih->ih_arg ? ih->ih_arg : &frame))
3922 * return; 3941 * return;
3923 * strayintr(&frame); 3942 * strayintr(&frame);
3924 * 3943 *
3925 * Inputs: 3944 * Inputs:
3926 * %l0 = %tstate 3945 * %l0 = %tstate
3927 * %l1 = return pc 3946 * %l1 = return pc
3928 * %l2 = return npc 3947 * %l2 = return npc
3929 * %l3 = interrupt level 3948 * %l3 = interrupt level
3930 * (software interrupt only) %l4 = bits to clear in interrupt register 3949 * (software interrupt only) %l4 = bits to clear in interrupt register
3931 * 3950 *
3932 * Internal: 3951 * Internal:
3933 * %l4, %l5: local variables 3952 * %l4, %l5: local variables
3934 * %l6 = %y 3953 * %l6 = %y
3935 * %l7 = %g1 3954 * %l7 = %g1
3936 * %g2..%g7 go to stack 3955 * %g2..%g7 go to stack
3937 * 3956 *
3938 * An interrupt frame is built in the space for a full trapframe; 3957 * An interrupt frame is built in the space for a full trapframe;
3939 * this contains the psr, pc, npc, and interrupt level. 3958 * this contains the psr, pc, npc, and interrupt level.
3940 * 3959 *
3941 * The level of this interrupt is determined by: 3960 * The level of this interrupt is determined by:
3942 * 3961 *
3943 * IRQ# = %tt - 0x40 3962 * IRQ# = %tt - 0x40
3944 */ 3963 */
3945 3964
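The slot-scan pseudocode in the comment above can be written as a small runnable C sketch; the types are a hypothetical miniature of `struct intrhand`, and the fixed-size slot array stands in for the per-CPU `intrpending` table:

```c
#include <assert.h>
#include <stddef.h>

/* Miniature of the per-level pending-slot scan described above. */
#define NSLOTS 8

struct intrhand {
    int (*ih_fun)(void *);
    void *ih_arg;
};

static struct intrhand *intrpending[16][NSLOTS];
static int strays;
static void strayintr(void) { strays++; }

static int claimed_arg_seen;
static int claiming_handler(void *arg) { claimed_arg_seen = (arg != NULL); return 1; }
static int refusing_handler(void *arg) { (void)arg; return 0; }

/* Returns 1 if some handler claimed the interrupt at this level. */
static int dispatch_level(int intlev, void *frame)
{
    for (int i = 0; i < NSLOTS; i++) {
        struct intrhand *ih = intrpending[intlev][i];
        if (ih != NULL) {
            intrpending[intlev][i] = NULL;            /* pull it off */
            if ((*ih->ih_fun)(ih->ih_arg ? ih->ih_arg : frame))
                return 1;                             /* claimed */
            strayintr();                              /* nobody claimed */
            return 0;
        }
    }
    return 0;   /* empty: fall back to polled dispatch */
}
```

As in the comment, a NULL `ih_arg` means the handler wants the trap frame instead.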
3946ENTRY_NOPROFILE(sparc_interrupt) 3965ENTRY_NOPROFILE(sparc_interrupt)
3947#ifdef TRAPS_USE_IG 3966#ifdef TRAPS_USE_IG
3948 ! This is for interrupt debugging 3967 ! This is for interrupt debugging
3949 wrpr %g0, PSTATE_KERN|PSTATE_IG, %pstate ! DEBUG 3968 wrpr %g0, PSTATE_KERN|PSTATE_IG, %pstate ! DEBUG
3950#endif 3969#endif
3951 /* 3970 /*
3952 * If this is a %tick softint, clear it then call interrupt_vector. 3971 * If this is a %tick softint, clear it then call interrupt_vector.
3953 */ 3972 */
3954 rd SOFTINT, %g1 3973 rd SOFTINT, %g1
3955 btst 1, %g1 3974 btst 1, %g1
3956 bz,pt %icc, 0f 3975 bz,pt %icc, 0f
3957 sethi %hi(CPUINFO_VA+CI_TICK_IH), %g3 3976 sethi %hi(CPUINFO_VA+CI_TICK_IH), %g3
3958 wr %g0, 1, CLEAR_SOFTINT 3977 wr %g0, 1, CLEAR_SOFTINT
3959 ba,pt %icc, setup_sparcintr 3978 ba,pt %icc, setup_sparcintr
3960 LDPTR [%g3 + %lo(CPUINFO_VA+CI_TICK_IH)], %g5 3979 LDPTR [%g3 + %lo(CPUINFO_VA+CI_TICK_IH)], %g5
39610: 39800:
3962 3981
3963 ! Increment the per-cpu interrupt level 3982 ! Increment the per-cpu interrupt level
3964 sethi %hi(CPUINFO_VA+CI_IDEPTH), %g1 3983 sethi %hi(CPUINFO_VA+CI_IDEPTH), %g1
3965 ld [%g1 + %lo(CPUINFO_VA+CI_IDEPTH)], %g2 3984 ld [%g1 + %lo(CPUINFO_VA+CI_IDEPTH)], %g2
3966 inc %g2 3985 inc %g2
3967 st %g2, [%g1 + %lo(CPUINFO_VA+CI_IDEPTH)] 3986 st %g2, [%g1 + %lo(CPUINFO_VA+CI_IDEPTH)]
3968 3987
3969#ifdef TRAPSTATS 3988#ifdef TRAPSTATS
3970 sethi %hi(_C_LABEL(kintrcnt)), %g1 3989 sethi %hi(_C_LABEL(kintrcnt)), %g1
3971 sethi %hi(_C_LABEL(uintrcnt)), %g2 3990 sethi %hi(_C_LABEL(uintrcnt)), %g2
3972 or %g1, %lo(_C_LABEL(kintrcnt)), %g1 3991 or %g1, %lo(_C_LABEL(kintrcnt)), %g1
 3973 or %g2, %lo(_C_LABEL(uintrcnt)), %g2 3992 or %g2, %lo(_C_LABEL(uintrcnt)), %g2
3974 rdpr %tl, %g3 3993 rdpr %tl, %g3
3975 dec %g3 3994 dec %g3
3976 movrz %g3, %g2, %g1 3995 movrz %g3, %g2, %g1
3977 lduw [%g1], %g2 3996 lduw [%g1], %g2
3978 inc %g2 3997 inc %g2
3979 stw %g2, [%g1] 3998 stw %g2, [%g1]
3980 /* See if we're on the interrupt stack already. */ 3999 /* See if we're on the interrupt stack already. */
3981 set EINTSTACK, %g2 4000 set EINTSTACK, %g2
3982 set (EINTSTACK-INTSTACK), %g1 4001 set (EINTSTACK-INTSTACK), %g1
3983 btst 1, %sp 4002 btst 1, %sp
3984 add %sp, BIAS, %g3 4003 add %sp, BIAS, %g3
3985 movz %icc, %sp, %g3 4004 movz %icc, %sp, %g3
3986 srl %g3, 0, %g3 4005 srl %g3, 0, %g3
3987 sub %g2, %g3, %g3 4006 sub %g2, %g3, %g3
3988 cmp %g3, %g1 4007 cmp %g3, %g1
3989 bgu 1f 4008 bgu 1f
3990 set _C_LABEL(intristk), %g1 4009 set _C_LABEL(intristk), %g1
3991 lduw [%g1], %g2 4010 lduw [%g1], %g2
3992 inc %g2 4011 inc %g2
3993 stw %g2, [%g1] 4012 stw %g2, [%g1]
39941: 40131:
3995#endif 4014#endif
3996 INTR_SETUP(-CC64FSZ-TF_SIZE) 4015 INTR_SETUP(-CC64FSZ-TF_SIZE)
3997 ! Switch to normal globals so we can save them 4016 ! Switch to normal globals so we can save them
3998 wrpr %g0, PSTATE_KERN, %pstate 4017 wrpr %g0, PSTATE_KERN, %pstate
3999 stx %g1, [%sp + CC64FSZ + STKB + TF_G + ( 1*8)] 4018 stx %g1, [%sp + CC64FSZ + STKB + TF_G + ( 1*8)]
4000 stx %g2, [%sp + CC64FSZ + STKB + TF_G + ( 2*8)] 4019 stx %g2, [%sp + CC64FSZ + STKB + TF_G + ( 2*8)]
4001 stx %g3, [%sp + CC64FSZ + STKB + TF_G + ( 3*8)] 4020 stx %g3, [%sp + CC64FSZ + STKB + TF_G + ( 3*8)]
4002 stx %g4, [%sp + CC64FSZ + STKB + TF_G + ( 4*8)] 4021 stx %g4, [%sp + CC64FSZ + STKB + TF_G + ( 4*8)]
4003 stx %g5, [%sp + CC64FSZ + STKB + TF_G + ( 5*8)] 4022 stx %g5, [%sp + CC64FSZ + STKB + TF_G + ( 5*8)]
4004 stx %g6, [%sp + CC64FSZ + STKB + TF_G + ( 6*8)] 4023 stx %g6, [%sp + CC64FSZ + STKB + TF_G + ( 6*8)]
4005 stx %g7, [%sp + CC64FSZ + STKB + TF_G + ( 7*8)] 4024 stx %g7, [%sp + CC64FSZ + STKB + TF_G + ( 7*8)]
4006 4025
4007 /* 4026 /*
4008 * In the EMBEDANY memory model %g4 points to the start of the 4027 * In the EMBEDANY memory model %g4 points to the start of the
4009 * data segment. In our case we need to clear it before calling 4028 * data segment. In our case we need to clear it before calling
4010 * any C-code. 4029 * any C-code.
4011 */ 4030 */
4012 clr %g4 4031 clr %g4
4013 4032
4014 flushw ! Do not remove this insn -- causes interrupt loss 4033 flushw ! Do not remove this insn -- causes interrupt loss
4015 rd %y, %l6 4034 rd %y, %l6
4016 INCR(_C_LABEL(uvmexp)+V_INTR) ! cnt.v_intr++; (clobbers %o0,%o1,%o2) 4035 INCR(_C_LABEL(uvmexp)+V_INTR) ! cnt.v_intr++; (clobbers %o0,%o1,%o2)
4017 rdpr %tt, %l5 ! Find out our current IPL 4036 rdpr %tt, %l5 ! Find out our current IPL
4018 rdpr %tstate, %l0 4037 rdpr %tstate, %l0
4019 rdpr %tpc, %l1 4038 rdpr %tpc, %l1
4020 rdpr %tnpc, %l2 4039 rdpr %tnpc, %l2
4021 rdpr %tl, %l3 ! Dump our trap frame now we have taken the IRQ 4040 rdpr %tl, %l3 ! Dump our trap frame now we have taken the IRQ
4022 stw %l6, [%sp + CC64FSZ + STKB + TF_Y] ! Silly, but we need to save this for rft 4041 stw %l6, [%sp + CC64FSZ + STKB + TF_Y] ! Silly, but we need to save this for rft
4023 dec %l3 4042 dec %l3
4024 CHKPT(%l4,%l7,0x26) 4043 CHKPT(%l4,%l7,0x26)
4025 wrpr %g0, %l3, %tl 4044 wrpr %g0, %l3, %tl
4026 sth %l5, [%sp + CC64FSZ + STKB + TF_TT]! debug 4045 sth %l5, [%sp + CC64FSZ + STKB + TF_TT]! debug
4027 stx %l0, [%sp + CC64FSZ + STKB + TF_TSTATE] ! set up intrframe/clockframe 4046 stx %l0, [%sp + CC64FSZ + STKB + TF_TSTATE] ! set up intrframe/clockframe
4028 stx %l1, [%sp + CC64FSZ + STKB + TF_PC] 4047 stx %l1, [%sp + CC64FSZ + STKB + TF_PC]
4029 btst TSTATE_PRIV, %l0 ! User mode? 4048 btst TSTATE_PRIV, %l0 ! User mode?
4030 stx %l2, [%sp + CC64FSZ + STKB + TF_NPC] 4049 stx %l2, [%sp + CC64FSZ + STKB + TF_NPC]
4031  4050
4032 sub %l5, 0x40, %l6 ! Convert to interrupt level 4051 sub %l5, 0x40, %l6 ! Convert to interrupt level
4033 sethi %hi(_C_LABEL(intr_evcnts)), %l4 4052 sethi %hi(_C_LABEL(intr_evcnts)), %l4
4034 stb %l6, [%sp + CC64FSZ + STKB + TF_PIL] ! set up intrframe/clockframe 4053 stb %l6, [%sp + CC64FSZ + STKB + TF_PIL] ! set up intrframe/clockframe
4035 rdpr %pil, %o1 4054 rdpr %pil, %o1
4036 mulx %l6, EVC_SIZE, %l3 4055 mulx %l6, EVC_SIZE, %l3
4037 or %l4, %lo(_C_LABEL(intr_evcnts)), %l4 ! intrcnt[intlev]++; 4056 or %l4, %lo(_C_LABEL(intr_evcnts)), %l4 ! intrcnt[intlev]++;
4038 stb %o1, [%sp + CC64FSZ + STKB + TF_OLDPIL] ! old %pil 4057 stb %o1, [%sp + CC64FSZ + STKB + TF_OLDPIL] ! old %pil
4039 ldx [%l4 + %l3], %o0 4058 ldx [%l4 + %l3], %o0
4040 add %l4, %l3, %l4 4059 add %l4, %l3, %l4
4041 clr %l5 ! Zero handled count 4060 clr %l5 ! Zero handled count
4042#ifdef MULTIPROCESSOR 4061#ifdef MULTIPROCESSOR
4043 mov 1, %l3 ! Ack softint 4062 mov 1, %l3 ! Ack softint
40441: add %o0, 1, %l7 40631: add %o0, 1, %l7
4045 casxa [%l4] ASI_N, %o0, %l7 4064 casxa [%l4] ASI_N, %o0, %l7
4046 cmp %o0, %l7 4065 cmp %o0, %l7
4047 bne,a,pn %xcc, 1b ! retry if changed 4066 bne,a,pn %xcc, 1b ! retry if changed
4048 mov %l7, %o0 4067 mov %l7, %o0
4049#else 4068#else
4050 inc %o0  4069 inc %o0
4051 mov 1, %l3 ! Ack softint 4070 mov 1, %l3 ! Ack softint
4052 stx %o0, [%l4] 4071 stx %o0, [%l4]
4053#endif 4072#endif
4054 sll %l3, %l6, %l3 ! Generate IRQ mask 4073 sll %l3, %l6, %l3 ! Generate IRQ mask
4055  4074
4056 wrpr %l6, %pil 4075 wrpr %l6, %pil
4057 4076
4058sparc_intr_retry: 4077sparc_intr_retry:
4059 wr %l3, 0, CLEAR_SOFTINT ! (don't clear possible %tick IRQ) 4078 wr %l3, 0, CLEAR_SOFTINT ! (don't clear possible %tick IRQ)
4060 sethi %hi(CPUINFO_VA+CI_INTRPENDING), %l4 4079 sethi %hi(CPUINFO_VA+CI_INTRPENDING), %l4
4061 sll %l6, PTRSHFT, %l2 4080 sll %l6, PTRSHFT, %l2
4062 or %l4, %lo(CPUINFO_VA+CI_INTRPENDING), %l4 4081 or %l4, %lo(CPUINFO_VA+CI_INTRPENDING), %l4
4063 add %l2, %l4, %l4 4082 add %l2, %l4, %l4
4064 4083
40651: 40841:
4066 membar #StoreLoad ! Make sure any failed casxa insns complete 4085 membar #StoreLoad ! Make sure any failed casxa insns complete
4067 LDPTR [%l4], %l2 ! Check a slot 4086 LDPTR [%l4], %l2 ! Check a slot
4068 cmp %l2, -1 4087 cmp %l2, -1
4069 beq,pn CCCR, intrcmplt ! Empty list? 4088 beq,pn CCCR, intrcmplt ! Empty list?
4070 mov -1, %l7 4089 mov -1, %l7
4071 membar #LoadStore 4090 membar #LoadStore
4072 CASPTR [%l4] ASI_N, %l2, %l7 ! Grab the entire list 4091 CASPTR [%l4] ASI_N, %l2, %l7 ! Grab the entire list
4073 cmp %l7, %l2 4092 cmp %l7, %l2
4074 bne,pn CCCR, 1b 4093 bne,pn CCCR, 1b
4075 EMPTY 4094 EMPTY
40762: 40952:
4077 add %sp, CC64FSZ+STKB, %o2 ! tf = %sp + CC64FSZ + STKB 4096 add %sp, CC64FSZ+STKB, %o2 ! tf = %sp + CC64FSZ + STKB
4078 LDPTR [%l2 + IH_PEND], %l7 ! save ih->ih_pending 4097 LDPTR [%l2 + IH_PEND], %l7 ! save ih->ih_pending
4079 membar #LoadStore 4098 membar #LoadStore
4080 STPTR %g0, [%l2 + IH_PEND] ! Clear pending flag 4099 STPTR %g0, [%l2 + IH_PEND] ! Clear pending flag
4081 membar #Sync 4100 membar #Sync
4082 LDPTR [%l2 + IH_FUN], %o4 ! ih->ih_fun 4101 LDPTR [%l2 + IH_FUN], %o4 ! ih->ih_fun
4083 LDPTR [%l2 + IH_ARG], %o0 ! ih->ih_arg 4102 LDPTR [%l2 + IH_ARG], %o0 ! ih->ih_arg
4084 4103
4085#ifdef NOT_DEBUG 4104#ifdef NOT_DEBUG
4086 set _C_LABEL(intrdebug), %o3 4105 set _C_LABEL(intrdebug), %o3
4087 ld [%o2], %o3 4106 ld [%o2], %o3
4088 btst INTRDEBUG_FUNC, %o3 4107 btst INTRDEBUG_FUNC, %o3
4089 bz,a,pt %icc, 97f 4108 bz,a,pt %icc, 97f
4090 nop 4109 nop
4091 4110
4092 cmp %l6, 0xa ! ignore clock interrupts? 4111 cmp %l6, 0xa ! ignore clock interrupts?
4093 bz,pt %icc, 97f 4112 bz,pt %icc, 97f
4094 nop 4113 nop
4095 4114
4096 STACKFRAME(-CC64FSZ) ! Get a clean register window 4115 STACKFRAME(-CC64FSZ) ! Get a clean register window
4097 LOAD_ASCIZ(%o0, "sparc_interrupt: func %p arg %p\r\n") 4116 LOAD_ASCIZ(%o0, "sparc_interrupt: func %p arg %p\r\n")
4098 mov %i0, %o2 ! arg 4117 mov %i0, %o2 ! arg
4099 GLOBTOLOC 4118 GLOBTOLOC
4100 call prom_printf 4119 call prom_printf
4101 mov %i4, %o1 ! func 4120 mov %i4, %o1 ! func
4102 LOCTOGLOB 4121 LOCTOGLOB
4103 restore 4122 restore
410497: 412397:
4105 mov %l4, %o1 4124 mov %l4, %o1
4106#endif 4125#endif
4107 4126
4108 wrpr %g0, PSTATE_INTR, %pstate ! Reenable interrupts 4127 wrpr %g0, PSTATE_INTR, %pstate ! Reenable interrupts
4109 jmpl %o4, %o7 ! handled = (*ih->ih_fun)(...) 4128 jmpl %o4, %o7 ! handled = (*ih->ih_fun)(...)
 4110 movrz %o0, %o2, %o0 ! arg = (arg == 0) ? tf : arg 4129 movrz %o0, %o2, %o0 ! arg = (arg == 0) ? tf : arg
4111 wrpr %g0, PSTATE_KERN, %pstate ! Disable interrupts 4130 wrpr %g0, PSTATE_KERN, %pstate ! Disable interrupts
4112 LDPTR [%l2 + IH_CLR], %l1 4131 LDPTR [%l2 + IH_CLR], %l1
4113 membar #Sync 4132 membar #Sync
4114 4133
4115 brz,pn %l1, 0f 4134 brz,pn %l1, 0f
4116 add %l5, %o0, %l5 4135 add %l5, %o0, %l5
4117#ifdef SCHIZO_BUS_SPACE_BROKEN  4136#ifdef SCHIZO_BUS_SPACE_BROKEN
4118 stxa %g0, [%l1] ASI_PHYS_NON_CACHED ! Clear intr source 4137 stxa %g0, [%l1] ASI_PHYS_NON_CACHED ! Clear intr source
4119#else 4138#else
4120 stx %g0, [%l1] ! Clear intr source 4139 stx %g0, [%l1] ! Clear intr source
4121#endif 4140#endif
4122 membar #Sync ! Should not be needed 4141 membar #Sync ! Should not be needed
41230: 41420:
4124 cmp %l7, -1 4143 cmp %l7, -1
4125 bne,pn CCCR, 2b ! 'Nother? 4144 bne,pn CCCR, 2b ! 'Nother?
4126 mov %l7, %l2 4145 mov %l7, %l2
4127 4146
intrcmplt:
	/*
	 * Re-read SOFTINT to see if any new pending interrupts
	 * at this level.
	 */
	mov 1, %l3				! Ack softint
	rd SOFTINT, %l7				! %l5 contains #intr handled.
	sll %l3, %l6, %l3			! Generate IRQ mask
	btst %l3, %l7				! leave mask in %l3 for retry code
	bnz,pn %icc, sparc_intr_retry
	 mov 1, %l5				! initialize intr count for next run

	! Decrement this cpu's interrupt depth
	sethi %hi(CPUINFO_VA+CI_IDEPTH), %l4
	ld [%l4 + %lo(CPUINFO_VA+CI_IDEPTH)], %l5
	dec %l5
	st %l5, [%l4 + %lo(CPUINFO_VA+CI_IDEPTH)]

#ifdef NOT_DEBUG
	set _C_LABEL(intrdebug), %o2
	ld [%o2], %o2
	btst INTRDEBUG_FUNC, %o2
	bz,a,pt %icc, 97f
	 nop

	cmp %l6, 0xa				! ignore clock interrupts?
	bz,pt %icc, 97f
	 nop

	STACKFRAME(-CC64FSZ)			! Get a clean register window
	LOAD_ASCIZ(%o0, "sparc_interrupt: done\r\n")
	GLOBTOLOC
	call prom_printf
	 nop
	LOCTOGLOB
	restore
97:
#endif

	ldub [%sp + CC64FSZ + STKB + TF_OLDPIL], %l3	! restore old %pil
	wrpr %l3, 0, %pil

	CHKPT(%o1,%o2,5)
	ba,a,pt %icc, return_from_trap
	 nop

#ifdef notyet
/*
 * Level 12 (ZS serial) interrupt.  Handle it quickly, schedule a
 * software interrupt, and get out.  Do the software interrupt directly
 * if we would just take it on the way out.
 *
 * Input:
 *	%l0 = %psr
 *	%l1 = return pc
 *	%l2 = return npc
 * Internal:
 *	%l3 = zs device
 *	%l4, %l5 = temporary
 *	%l6 = rr3 (or temporary data) + 0x100 => need soft int
 *	%l7 = zs soft status
 */
zshard:
#endif /* notyet */

	.globl return_from_trap, rft_kernel, rft_user
	.globl softtrap, slowtrap

/*
 * Various return-from-trap routines (see return_from_trap).
 */

/*
 * Return from trap.
 * registers are:
 *
 *	[%sp + CC64FSZ + STKB] => trap frame
 *
 * We must load all global, out, and trap registers from the trap frame.
 *
 * If returning to kernel, we should be at the proper trap level because
 * we don't touch %tl.
 *
 * When returning to user mode, the trap level does not matter, as it
 * will be set explicitly.
 *
 * If we are returning to user code, we must:
 * 1. Check for register windows in the pcb that belong on the stack.
 *    If there are any, reload them
 */
return_from_trap:
#ifdef DEBUG
	!! Make sure we don't have pc == npc == 0 or we suck.
	ldx [%sp + CC64FSZ + STKB + TF_PC], %g2
	ldx [%sp + CC64FSZ + STKB + TF_NPC], %g3
	orcc %g2, %g3, %g0
	tz %icc, 1
#endif

#if KTR_COMPILE & KTR_TRAP
	CATR(KTR_TRAP, "rft: sp=%p pc=%p npc=%p tstate=%p",
	    %g2, %g3, %g4, 10, 11, 12)
	stx %i6, [%g2 + KTR_PARM1]
	ldx [%sp + CC64FSZ + STKB + TF_PC], %g3
	stx %g3, [%g2 + KTR_PARM2]
	ldx [%sp + CC64FSZ + STKB + TF_NPC], %g3
	stx %g3, [%g2 + KTR_PARM3]
	ldx [%sp + CC64FSZ + STKB + TF_TSTATE], %g3
	stx %g3, [%g2 + KTR_PARM4]
12:
#endif

	!!
	!! We'll make sure we flush our pcb here, rather than later.
	!!
	ldx [%sp + CC64FSZ + STKB + TF_TSTATE], %g1
	btst TSTATE_PRIV, %g1			! returning to userland?

	!!
	!! Let all pending interrupts drain before returning to userland
	!!
	bnz,pn %icc, 1f				! Returning to userland?
	 nop
	wrpr %g0, PSTATE_INTR, %pstate
	wrpr %g0, %g0, %pil			! Lower IPL
1:
	wrpr %g0, PSTATE_KERN, %pstate		! Make sure we have normal globals & no IRQs

	/* Restore normal globals */
	ldx [%sp + CC64FSZ + STKB + TF_G + (1*8)], %g1
	ldx [%sp + CC64FSZ + STKB + TF_G + (2*8)], %g2
	ldx [%sp + CC64FSZ + STKB + TF_G + (3*8)], %g3
	ldx [%sp + CC64FSZ + STKB + TF_G + (4*8)], %g4
	ldx [%sp + CC64FSZ + STKB + TF_G + (5*8)], %g5
	ldx [%sp + CC64FSZ + STKB + TF_G + (6*8)], %g6
	ldx [%sp + CC64FSZ + STKB + TF_G + (7*8)], %g7
	/* Switch to alternate globals and load outs */
	wrpr %g0, PSTATE_KERN|PSTATE_AG, %pstate
#ifdef TRAPS_USE_IG
	wrpr %g0, PSTATE_KERN|PSTATE_IG, %pstate	! DEBUG
#endif
	ldx [%sp + CC64FSZ + STKB + TF_O + (0*8)], %i0
	ldx [%sp + CC64FSZ + STKB + TF_O + (1*8)], %i1
	ldx [%sp + CC64FSZ + STKB + TF_O + (2*8)], %i2
	ldx [%sp + CC64FSZ + STKB + TF_O + (3*8)], %i3
	ldx [%sp + CC64FSZ + STKB + TF_O + (4*8)], %i4
	ldx [%sp + CC64FSZ + STKB + TF_O + (5*8)], %i5
	ldx [%sp + CC64FSZ + STKB + TF_O + (6*8)], %i6
	ldx [%sp + CC64FSZ + STKB + TF_O + (7*8)], %i7
	/* Now load trap registers into alternate globals */
	ld [%sp + CC64FSZ + STKB + TF_Y], %g4
	ldx [%sp + CC64FSZ + STKB + TF_TSTATE], %g1	! load new values
	wr %g4, 0, %y
	ldx [%sp + CC64FSZ + STKB + TF_PC], %g2
	ldx [%sp + CC64FSZ + STKB + TF_NPC], %g3

#ifdef NOTDEF_DEBUG
	ldub [%sp + CC64FSZ + STKB + TF_PIL], %g5	! restore %pil
	wrpr %g5, %pil				! DEBUG
#endif

	/* Returning to user mode or kernel mode? */
	btst TSTATE_PRIV, %g1			! returning to userland?
	CHKPT(%g4, %g7, 6)
	bz,pt %icc, rft_user
	 sethi %hi(CPUINFO_VA+CI_WANT_AST), %g7	! first instr of rft_user
/*
 * Return from trap, to kernel.
 *
 * We will assume, for the moment, that all kernel traps are properly stacked
 * in the trap registers, so all we have to do is insert the (possibly modified)
 * register values into the trap registers then do a retry.
 *
 */
rft_kernel:
	rdpr %tl, %g4				! Grab a set of trap registers
	inc %g4
	wrpr %g4, %g0, %tl
	wrpr %g3, 0, %tnpc
	wrpr %g2, 0, %tpc
	wrpr %g1, 0, %tstate
	CHKPT(%g1,%g2,7)
	restore
	CHKPT(%g1,%g2,0)			! Clear this out
	rdpr %tstate, %g1			! Since we may have trapped our regs may be toast
	rdpr %cwp, %g2
	andn %g1, CWP, %g1
	wrpr %g1, %g2, %tstate			! Put %cwp in %tstate
	CLRTT
#ifdef TRAPSTATS
	rdpr %tl, %g2
	set _C_LABEL(rftkcnt), %g1
	sllx %g2, 2, %g2
	add %g1, %g2, %g1
	lduw [%g1], %g2
	inc %g2
	stw %g2, [%g1]
#endif
#if 0
	wrpr %g0, 0, %cleanwin			! DEBUG
#endif
#if defined(DDB) && defined(MULTIPROCESSOR)
	set sparc64_ipi_pause_trap_point, %g1
	rdpr %tpc, %g2
	cmp %g1, %g2
	bne,pt %icc, 0f
	 nop
	done
0:
#endif
	retry
	NOTREACHED
/*
 * Return from trap, to user.  Checks for scheduling trap (`ast') first;
 * will re-enter trap() if set.  Note that we may have to switch from
 * the interrupt stack to the kernel stack in this case.
 *	%g1 = %tstate
 *	%g2 = return %pc
 *	%g3 = return %npc
 * If returning to a valid window, just set psr and return.
 */
	.data
rft_wcnt:	.word 0
	.text

rft_user:
!	sethi %hi(CPUINFO_VA+CI_WANT_AST), %g7	! (done above)
	lduw [%g7 + %lo(CPUINFO_VA+CI_WANT_AST)], %g7	! want AST trap?
	brnz,pn %g7, softtrap			! yes, re-enter trap with type T_AST
	 mov T_AST, %g4

	CHKPT(%g4,%g7,8)
#ifdef NOTDEF_DEBUG
	sethi %hi(CPCB), %g4
	LDPTR [%g4 + %lo(CPCB)], %g4
	ldub [%g4 + PCB_NSAVED], %g4		! nsaved
	brz,pt %g4, 2f				! Only print if nsaved <> 0
	 nop

	set 1f, %o0
	mov %g4, %o1
	mov %g2, %o2				! pc
	wr %g0, ASI_DMMU, %asi			! restore the user context
	ldxa [CTX_SECONDARY] %asi, %o3		! ctx
	GLOBTOLOC
	mov %g3, %o5
	call printf
	 mov %i6, %o4				! sp
!	wrpr %g0, PSTATE_INTR, %pstate		! Allow IRQ service
!	wrpr %g0, PSTATE_KERN, %pstate		! Deny IRQ service
	LOCTOGLOB
1:
	.data
	.asciz "rft_user: nsaved=%x pc=%d ctx=%x sp=%x npc=%p\n"
	_ALIGN
	.text
#endif

	/*
	 * NB: only need to do this after a cache miss
	 */
#ifdef TRAPSTATS
	set _C_LABEL(rftucnt), %g6
	lduw [%g6], %g7
	inc %g7
	stw %g7, [%g6]
#endif
	/*
	 * Now check to see if any regs are saved in the pcb and restore them.
	 *
	 * Here we need to undo the damage caused by switching to a kernel
	 * stack.
	 *
	 * We will use alternate globals %g4..%g7 because %g1..%g3 are used
	 * by the data fault trap handlers and we don't want possible conflict.
	 */

	sethi %hi(CPCB), %g6
	rdpr %otherwin, %g7			! restore register window controls
#ifdef DEBUG
	rdpr %canrestore, %g5			! DEBUG
	tst %g5					! DEBUG
	tnz %icc, 1; nop			! DEBUG
!	mov %g0, %g5				! There should be *NO* %canrestore
	add %g7, %g5, %g7			! DEBUG
#endif
	wrpr %g0, %g7, %canrestore
	LDPTR [%g6 + %lo(CPCB)], %g6
	wrpr %g0, 0, %otherwin

	CHKPT(%g4,%g7,9)
	ldub [%g6 + PCB_NSAVED], %g7		! Any saved reg windows?
	wrpr %g0, WSTATE_USER, %wstate		! Need to know where our sp points

#ifdef DEBUG
	set rft_wcnt, %g4			! Keep track of all the windows we restored
	stw %g7, [%g4]
#endif

	brz,pt %g7, 5f				! No saved reg wins
	 nop
	dec %g7					! We can do this now or later.  Move to last entry

#ifdef DEBUG
	rdpr %canrestore, %g4			! DEBUG Make sure we've restored everything
	brnz,a,pn %g4, 0f			! DEBUG
	 sir					! DEBUG we should NOT have any usable windows here
0:						! DEBUG
	wrpr %g0, 5, %tl
#endif
	rdpr %otherwin, %g4
	sll %g7, 7, %g5				! calculate ptr into rw64 array 8*16 == 128 or 7 bits
	brz,pt %g4, 6f				! We should not have any user windows left
	 add %g5, %g6, %g5

	set 1f, %o0
	mov %g7, %o1
	mov %g4, %o2
	call printf
	 wrpr %g0, PSTATE_KERN, %pstate
	set 2f, %o0
	call panic
	 nop
	NOTREACHED
	.data
1:	.asciz "pcb_nsaved=%x and otherwin=%x\n"
2:	.asciz "rft_user\n"
	_ALIGN
	.text
6:
3:
	restored				! Load in the window
	restore					! This should not trap!
	ldx [%g5 + PCB_RW + ( 0*8)], %l0	! Load the window from the pcb
	ldx [%g5 + PCB_RW + ( 1*8)], %l1
	ldx [%g5 + PCB_RW + ( 2*8)], %l2
	ldx [%g5 + PCB_RW + ( 3*8)], %l3
	ldx [%g5 + PCB_RW + ( 4*8)], %l4
	ldx [%g5 + PCB_RW + ( 5*8)], %l5
	ldx [%g5 + PCB_RW + ( 6*8)], %l6
	ldx [%g5 + PCB_RW + ( 7*8)], %l7

	ldx [%g5 + PCB_RW + ( 8*8)], %i0
	ldx [%g5 + PCB_RW + ( 9*8)], %i1
	ldx [%g5 + PCB_RW + (10*8)], %i2
	ldx [%g5 + PCB_RW + (11*8)], %i3
	ldx [%g5 + PCB_RW + (12*8)], %i4
	ldx [%g5 + PCB_RW + (13*8)], %i5
	ldx [%g5 + PCB_RW + (14*8)], %i6
	ldx [%g5 + PCB_RW + (15*8)], %i7

#ifdef DEBUG
	stx %g0, [%g5 + PCB_RW + (14*8)]	! DEBUG mark that we've saved this one
#endif

	cmp %g5, %g6
	bgu,pt %xcc, 3b				! Next one?
	 dec 8*16, %g5

	rdpr %ver, %g5
	stb %g0, [%g6 + PCB_NSAVED]		! Clear them out so we won't do this again
	and %g5, CWP, %g5
	add %g5, %g7, %g4
	dec 1, %g5				! NWINDOWS-1-1
	wrpr %g5, 0, %cansave
	wrpr %g0, 0, %canrestore		! Make sure we have no freeloaders XXX
	wrpr %g0, WSTATE_USER, %wstate		! Save things to user space
	mov %g7, %g5				! We already did one restore
4:
	rdpr %canrestore, %g4
	inc %g4
	deccc %g5
	wrpr %g4, 0, %cleanwin			! Make *sure* we don't trap to cleanwin
	bge,a,pt %xcc, 4b			! return to starting regwin
	 save %g0, %g0, %g0			! This may force a datafault

#ifdef DEBUG
	wrpr %g0, 0, %tl
#endif
#ifdef TRAPSTATS
	set _C_LABEL(rftuld), %g5
	lduw [%g5], %g4
	inc %g4
	stw %g4, [%g5]
#endif
	!!
	!! We can't take any save faults in here 'cause they will never be serviced
	!!

#ifdef DEBUG
	sethi %hi(CPCB), %g5
	LDPTR [%g5 + %lo(CPCB)], %g5
	ldub [%g5 + PCB_NSAVED], %g5		! Any saved reg windows?
	tst %g5
	tnz %icc, 1; nop			! Debugger if we still have saved windows
	bne,a rft_user				! Try starting over again
	 sethi %hi(CPUINFO_VA+CI_WANT_AST), %g7
#endif
	/*
	 * Set up our return trapframe so we can recover if we trap from here
	 * on in.
	 */
	wrpr %g0, 1, %tl			! Set up the trap state
	wrpr %g2, 0, %tpc
	wrpr %g3, 0, %tnpc
	ba,pt %icc, 6f
	 wrpr %g1, %g0, %tstate

5:
	/*
	 * Set up our return trapframe so we can recover if we trap from here
	 * on in.
	 */
	wrpr %g0, 1, %tl			! Set up the trap state
	wrpr %g2, 0, %tpc
	wrpr %g3, 0, %tnpc
	wrpr %g1, %g0, %tstate
	restore
6:
	CHKPT(%g4,%g7,0xa)
	rdpr %canrestore, %g5
	wrpr %g5, 0, %cleanwin			! Force cleanup of kernel windows

#ifdef NOTDEF_DEBUG
	ldx [%g6 + CC64FSZ + STKB + TF_L + (0*8)], %g5	! DEBUG -- get proper value for %l0
	cmp %l0, %g5
	be,a,pt %icc, 1f
	 nop
!	sir					! WATCHDOG
	set badregs, %g1			! Save the suspect regs
	stw %l0, [%g1+(4*0)]
	stw %l1, [%g1+(4*1)]
	stw %l2, [%g1+(4*2)]
	stw %l3, [%g1+(4*3)]
	stw %l4, [%g1+(4*4)]
	stw %l5, [%g1+(4*5)]
	stw %l6, [%g1+(4*6)]
	stw %l7, [%g1+(4*7)]
	stw %i0, [%g1+(4*8)+(4*0)]
	stw %i1, [%g1+(4*8)+(4*1)]
	stw %i2, [%g1+(4*8)+(4*2)]
	stw %i3, [%g1+(4*8)+(4*3)]
	stw %i4, [%g1+(4*8)+(4*4)]
	stw %i5, [%g1+(4*8)+(4*5)]
	stw %i6, [%g1+(4*8)+(4*6)]
	stw %i7, [%g1+(4*8)+(4*7)]
	save
	inc %g7
	wrpr %g7, 0, %otherwin
	wrpr %g0, 0, %canrestore
	wrpr %g0, WSTATE_KERN, %wstate		! Need to know where our sp points
	set rft_wcnt, %g4			! Restore nsaved before trapping
	sethi %hi(CPCB), %g6
	LDPTR [%g6 + %lo(CPCB)], %g6
	lduw [%g4], %g4
	stb %g4, [%g6 + PCB_NSAVED]
	ta 1
	sir
	.data
badregs:
	.space 16*4
	.text
1:
#endif

	rdpr %tstate, %g1
	rdpr %cwp, %g7				! Find our cur window
	andn %g1, CWP, %g1			! Clear it from %tstate
	wrpr %g1, %g7, %tstate			! Set %tstate with %cwp
	CHKPT(%g4,%g7,0xb)

	wr %g0, ASI_DMMU, %asi			! restore the user context
	ldxa [CTX_SECONDARY] %asi, %g4
	sethi %hi(KERNBASE), %g7		! Should not be needed due to retry
	stxa %g4, [CTX_PRIMARY] %asi
	membar #Sync				! Should not be needed due to retry
	flush %g7				! Should not be needed due to retry
	CLRTT
	CHKPT(%g4,%g7,0xd)
#ifdef TRAPSTATS
	set _C_LABEL(rftudone), %g1
	lduw [%g1], %g2
	inc %g2
	stw %g2, [%g1]
#endif
#ifdef DEBUG
	sethi %hi(CPCB), %g5
	LDPTR [%g5 + %lo(CPCB)], %g5
	ldub [%g5 + PCB_NSAVED], %g5		! Any saved reg windows?
	tst %g5
	tnz %icc, 1; nop			! Debugger if we still have saved windows!
#endif
	wrpr %g0, 0, %pil			! Enable all interrupts
	retry

! exported end marker for kernel gdb
	.globl _C_LABEL(endtrapcode)
_C_LABEL(endtrapcode):

#ifdef DDB
!!!
!!! Dump the DTLB to phys address in %o0 and print it
!!!
!!! Only toast a few %o registers
!!!

ENTRY_NOPROFILE(dump_dtlb)
	clr %o1
	add %o1, (64 * 8), %o3
1:
	ldxa [%o1] ASI_DMMU_TLB_TAG, %o2
	membar #Sync
	stx %o2, [%o0]
	membar #Sync
	inc 8, %o0
	ldxa [%o1] ASI_DMMU_TLB_DATA, %o4
	membar #Sync
	inc 8, %o1
	stx %o4, [%o0]
	cmp %o1, %o3
	membar #Sync
	bl 1b
	 inc 8, %o0

	retl
	 nop

ENTRY_NOPROFILE(dump_itlb)
	clr %o1
	add %o1, (64 * 8), %o3
1:
	ldxa [%o1] ASI_IMMU_TLB_TAG, %o2
	membar #Sync
	stx %o2, [%o0]
	membar #Sync
	inc 8, %o0
	ldxa [%o1] ASI_IMMU_TLB_DATA, %o4
	membar #Sync
	inc 8, %o1
	stx %o4, [%o0]
	cmp %o1, %o3
	membar #Sync
	bl 1b
	 inc 8, %o0

	retl
	 nop

#ifdef _LP64
ENTRY_NOPROFILE(print_dtlb)
	save %sp, -CC64FSZ, %sp
	clr %l1
	add %l1, (64 * 8), %l3
	clr %l2
1:
	ldxa [%l1] ASI_DMMU_TLB_TAG, %o2
	membar #Sync
	mov %l2, %o1
	ldxa [%l1] ASI_DMMU_TLB_DATA, %o3
	membar #Sync
	inc %l2
	set 2f, %o0
	call _C_LABEL(db_printf)
	 inc 8, %l1

	ldxa [%l1] ASI_DMMU_TLB_TAG, %o2
	membar #Sync
	mov %l2, %o1
	ldxa [%l1] ASI_DMMU_TLB_DATA, %o3
	membar #Sync
	inc %l2
	set 3f, %o0
	call _C_LABEL(db_printf)
	 inc 8, %l1

	cmp %l1, %l3
	bl 1b
	 inc 8, %l0

	ret
	 restore


ENTRY_NOPROFILE(print_itlb)
	save %sp, -CC64FSZ, %sp
	clr %l1
	add %l1, (64 * 8), %l3
	clr %l2
1:
	ldxa [%l1] ASI_IMMU_TLB_TAG, %o2
	membar #Sync
	mov %l2, %o1
	ldxa [%l1] ASI_IMMU_TLB_DATA, %o3
	membar #Sync
	inc %l2
	set 2f, %o0
	call _C_LABEL(db_printf)
	 inc 8, %l1

	ldxa [%l1] ASI_IMMU_TLB_TAG, %o2
	membar #Sync
	mov %l2, %o1
	ldxa [%l1] ASI_IMMU_TLB_DATA, %o3
	membar #Sync
	inc %l2
	set 3f, %o0
	call _C_LABEL(db_printf)
	 inc 8, %l1

	cmp %l1, %l3
	bl 1b
	 inc 8, %l0

	ret
	 restore

	.data
2:
	.asciz "%2d:%016lx %016lx "
3:
	.asciz "%2d:%016lx %016lx\r\n"
	.text
#else
ENTRY_NOPROFILE(print_dtlb)
	save %sp, -CC64FSZ, %sp
	clr %l1
	add %l1, (64 * 8), %l3
	clr %l2
1:
	ldxa [%l1] ASI_DMMU_TLB_TAG, %o2
	membar #Sync
	srl %o2, 0, %o3
	mov %l2, %o1
	srax %o2, 32, %o2
	ldxa [%l1] ASI_DMMU_TLB_DATA, %o4
	membar #Sync
	srl %o4, 0, %o5
	inc %l2
	srax %o4, 32, %o4
	set 2f, %o0
	call _C_LABEL(db_printf)
	 inc 8, %l1

	ldxa [%l1] ASI_DMMU_TLB_TAG, %o2
	membar #Sync
	srl %o2, 0, %o3
	mov %l2, %o1
	srax %o2, 32, %o2
	ldxa [%l1] ASI_DMMU_TLB_DATA, %o4
	membar #Sync
	srl %o4, 0, %o5
	inc %l2
	srax %o4, 32, %o4
	set 3f, %o0
	call _C_LABEL(db_printf)
	 inc 8, %l1

	cmp %l1, %l3
	bl 1b
	 inc 8, %l0

	ret
	 restore

ENTRY_NOPROFILE(print_itlb)
	save %sp, -CC64FSZ, %sp
	clr %l1
	add %l1, (64 * 8), %l3
	clr %l2
1:
	ldxa [%l1] ASI_IMMU_TLB_TAG, %o2
	membar #Sync
	srl %o2, 0, %o3
	mov %l2, %o1
	srax %o2, 32, %o2
	ldxa [%l1] ASI_IMMU_TLB_DATA, %o4
	membar #Sync
	srl %o4, 0, %o5
4807 inc %l2 4826 inc %l2
4808 srax %o4, 32, %o4 4827 srax %o4, 32, %o4
4809 set 2f, %o0 4828 set 2f, %o0
4810 call _C_LABEL(db_printf) 4829 call _C_LABEL(db_printf)
4811 inc 8, %l1 4830 inc 8, %l1
4812 4831
4813 ldxa [%l1] ASI_IMMU_TLB_TAG, %o2 4832 ldxa [%l1] ASI_IMMU_TLB_TAG, %o2
4814 membar #Sync 4833 membar #Sync
4815 srl %o2, 0, %o3 4834 srl %o2, 0, %o3
4816 mov %l2, %o1 4835 mov %l2, %o1
4817 srax %o2, 32, %o2 4836 srax %o2, 32, %o2
4818 ldxa [%l1] ASI_IMMU_TLB_DATA, %o4 4837 ldxa [%l1] ASI_IMMU_TLB_DATA, %o4
4819 membar #Sync 4838 membar #Sync
4820 srl %o4, 0, %o5 4839 srl %o4, 0, %o5
4821 inc %l2 4840 inc %l2
4822 srax %o4, 32, %o4 4841 srax %o4, 32, %o4
4823 set 3f, %o0 4842 set 3f, %o0
4824 call _C_LABEL(db_printf) 4843 call _C_LABEL(db_printf)
4825 inc 8, %l1 4844 inc 8, %l1
4826 4845
4827 cmp %l1, %l3 4846 cmp %l1, %l3
4828 bl 1b 4847 bl 1b
4829 inc 8, %l0 4848 inc 8, %l0
4830 4849
4831 ret 4850 ret
4832 restore 4851 restore
4833 4852
4834 .data 4853 .data
48352: 48542:
4836 .asciz "%2d:%08x:%08x %08x:%08x " 4855 .asciz "%2d:%08x:%08x %08x:%08x "
48373: 48563:
4838 .asciz "%2d:%08x:%08x %08x:%08x\r\n" 4857 .asciz "%2d:%08x:%08x %08x:%08x\r\n"
4839 .text 4858 .text
4840#endif 4859#endif
4841#endif 4860#endif
4842 4861
/*
 * Kernel entry point.
 *
 * The contract between bootloader and kernel is:
 *
 * %o0		OpenFirmware entry point, to keep Sun's updaters happy
 * %o1		Address of boot information vector (see bootinfo.h)
 * %o2		Length of the vector, in bytes
 * %o3		OpenFirmware entry point, to mimic Sun bootloader behavior
 * %o4		OpenFirmware, to meet earlier NetBSD kernels' expectations
 */
	.align	8
start:
dostart:
	mov	1, %g1
	sllx	%g1, 63, %g1
	wr	%g1, TICK_CMPR	! XXXXXXX clear and disable %tick_cmpr for now
	/*
	 * Startup.
	 *
	 * The Sun FCODE bootloader is nice and loads us where we want
	 * to be.  We have a full set of mappings already set up for us.
	 *
	 * I think we end up having an entire 16M allocated to us.
	 *
	 * We enter with the prom entry vector in %o0, dvec in %o1,
	 * and the bootops vector in %o2.
	 *
	 * All we need to do is:
	 *
	 *	1: Save the prom vector
	 *
	 *	2: Create a decent stack for ourselves
	 *
	 *	3: Install the permanent 4MB kernel mapping
	 *
	 *	4: Call the C language initialization code
	 *
	 */

	/*
	 * Set the psr into a known state:
	 * Set supervisor mode, interrupt level >= 13, traps enabled
	 */
	wrpr	%g0, 13, %pil
	wrpr	%g0, PSTATE_INTR|PSTATE_PEF, %pstate
	wr	%g0, FPRS_FEF, %fprs		! Turn on FPU

	/*
	 * Step 2: Set up a v8-like stack if we need to
	 */

#ifdef _LP64
	btst	1, %sp
	bnz,pt	%icc, 0f
	nop
	add	%sp, -BIAS, %sp
#else
	btst	1, %sp
	bz,pt	%icc, 0f
	nop
	add	%sp, BIAS, %sp
#endif
0:

	call	_C_LABEL(bootstrap)
	clr	%g4			! Clear data segment pointer

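The `btst`/`add` pairs above implement the SPARC V9 stack-bias convention: a 64-bit frame is flagged by an odd %sp, and all frame accesses are offset by BIAS (2047). A minimal Python sketch of that normalization, assuming only the standard bias value:

```python
BIAS = 2047  # SPARC V9 stack bias (0x7ff); an odd %sp marks a 64-bit frame

def normalize_sp(sp, lp64):
    """Mirror the btst/add pairs above: on LP64 an even (unbiased)
    %sp gets the bias subtracted so later code can assume a biased
    v9 stack; on 32-bit an odd (biased) %sp gets the bias added
    back so it looks like a v8 stack again."""
    mask64 = (1 << 64) - 1
    if lp64 and (sp & 1) == 0:
        sp = (sp - BIAS) & mask64
    elif not lp64 and (sp & 1) == 1:
        sp = (sp + BIAS) & mask64
    return sp
```

Since BIAS is odd, subtracting it flips the low bit, which is exactly how the branch distinguishes the two cases.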
/*
 * Initialize the boot CPU.  Basically:
 *
 *	Locate the cpu_info structure for this CPU.
 *	Establish a locked mapping for interrupt stack.
 *	Switch to the initial stack.
 *	Call the routine passed in in cpu_info->ci_spinup
 */

#ifdef NO_VCACHE
#define	TTE_DATABITS	TTE_L|TTE_CP|TTE_P|TTE_W
#else
#define	TTE_DATABITS	TTE_L|TTE_CP|TTE_CV|TTE_P|TTE_W
#endif


ENTRY_NOPROFILE(cpu_initialize)	/* for cosmetic reasons - nicer backtrace */
	/*
	 * Step 5: is no more.
	 */

	/*
	 * Step 6: hunt through cpus list and find the one that
	 * matches our UPAID.
	 */
	sethi	%hi(_C_LABEL(cpus)), %l1
	ldxa	[%g0] ASI_MID_REG, %l2
	LDPTR	[%l1 + %lo(_C_LABEL(cpus))], %l1
	srax	%l2, 17, %l2			! Isolate UPAID from CPU reg
	and	%l2, 0x1f, %l2
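The `srax`/`and` pair above isolates the UPAID field from the UPA port-ID register read through ASI_MID_REG. As plain arithmetic (assuming, as the shift and mask imply, a 5-bit field starting at bit 17):

```python
def upaid_from_mid(mid):
    """Isolate UPAID the way the srax/and pair does: arithmetic
    shift right by 17, then keep the low 5 bits."""
    return (mid >> 17) & 0x1F
```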
0:
	ld	[%l1 + CI_UPAID], %l3		! Load UPAID
	cmp	%l3, %l2			! Does it match?
	bne,a,pt %icc, 0b			! no
	LDPTR	[%l1 + CI_NEXT], %l1		! Load next cpu_info pointer


	/*
	 * Get pointer to our cpu_info struct
	 */
	ldx	[%l1 + CI_PADDR], %l1		! Load the interrupt stack's PA

	sethi	%hi(0xa0000000), %l2		! V=1|SZ=01|NFO=0|IE=0
	sllx	%l2, 32, %l2			! Shift it into place

	mov	-1, %l3				! Create a nice mask
	sllx	%l3, 41, %l4			! Mask off high bits
	or	%l4, 0xfff, %l4			! We can just load this in 12 (of 13) bits

	andn	%l1, %l4, %l1			! Mask the phys page number

	or	%l2, %l1, %l1			! Now take care of the high bits
	or	%l1, TTE_DATABITS, %l2		! And low bits:	L=1|CP=1|CV=?|E=0|P=1|W=1|G=0

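The `mov -1`/`sllx`/`or`/`andn` sequence above builds the locked TTE for the interrupt stack. A sketch of the same bit manipulation in Python — note that the TTE_DATABITS value below is a placeholder for illustration only; the real TTE_L/TTE_CP/TTE_CV/TTE_P/TTE_W encodings live in the sparc64 pte definitions:

```python
WORD = (1 << 64) - 1
# Placeholder for TTE_DATABITS (TTE_L|TTE_CP|TTE_CV|TTE_P|TTE_W);
# the real bit positions are defined in the sparc64 pte headers.
TTE_DATABITS = 0x7E

def intstack_tte(pa):
    """Build the locked data-TLB entry the way the code above does:
    V=1|SZ=01 in the top byte, the physical page number taken from
    bits 40..13 of the PA, and the protection/lock bits OR-ed in."""
    high = (0xA0000000 << 32) & WORD     # V=1 | SZ=01 | NFO=0 | IE=0
    mask = ((-1 << 41) | 0xFFF) & WORD   # bits 63..41 plus the page offset
    return high | (pa & ~mask & WORD) | TTE_DATABITS
```

The mask is the 64-bit analogue of the `sllx %l3, 41` / `or %l4, 0xfff` pair: everything above bit 40 and below bit 12 is stripped from the physical address.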
	!!
	!!  Now, map in the interrupt stack as context==0
	!!
	set	TLB_TAG_ACCESS, %l5
	set	INTSTACK, %l0
	stxa	%l0, [%l5] ASI_DMMU		! Make DMMU point to it
	stxa	%l2, [%g0] ASI_DMMU_DATA_IN	! Store it
	membar	#Sync

	!! Setup kernel stack (we rely on curlwp on this cpu
	!! being lwp0 here and its uarea is mapped special
	!! and already accessible here)
	flushw
	sethi	%hi(CPUINFO_VA+CI_CURLWP), %l0
	LDPTR	[%l0 + %lo(CPUINFO_VA+CI_CURLWP)], %l0
	set	USPACE - TF_SIZE - CC64FSZ, %l1
	LDPTR	[%l0 + L_PCB], %l0
	add	%l1, %l0, %l0
#ifdef _LP64
	andn	%l0, 0x0f, %l0			! Needs to be 16-byte aligned
	sub	%l0, BIAS, %l0			! and biased
#endif
	mov	%l0, %sp
	flushw

#ifdef DEBUG
	set	_C_LABEL(pmapdebug), %o1
	ld	[%o1], %o1
	sethi	%hi(0x40000), %o2
	btst	%o2, %o1
	bz	0f

	set	1f, %o0				! Debug printf
	call	_C_LABEL(prom_printf)
	.data
1:
	.asciz	"Setting trap base...\r\n"
	_ALIGN
	.text
0:
#endif
	/*
	 * Step 7: change the trap base register, and install our TSB pointers
	 */

	/*
	 * install our TSB pointers
	 */
	sethi	%hi(CPUINFO_VA+CI_TSB_DMMU), %l0
	sethi	%hi(CPUINFO_VA+CI_TSB_IMMU), %l1
	sethi	%hi(_C_LABEL(tsbsize)), %l2
	sethi	%hi(0x1fff), %l3
	sethi	%hi(TSB), %l4
	LDPTR	[%l0 + %lo(CPUINFO_VA+CI_TSB_DMMU)], %l0
	LDPTR	[%l1 + %lo(CPUINFO_VA+CI_TSB_IMMU)], %l1
	ld	[%l2 + %lo(_C_LABEL(tsbsize))], %l2
	or	%l3, %lo(0x1fff), %l3
	or	%l4, %lo(TSB), %l4

	andn	%l0, %l3, %l0			! Mask off size and split bits
	or	%l0, %l2, %l0			! Make a TSB pointer
	stxa	%l0, [%l4] ASI_DMMU		! Install data TSB pointer

	andn	%l1, %l3, %l1			! Mask off size and split bits
	or	%l1, %l2, %l1			! Make a TSB pointer
	stxa	%l1, [%l4] ASI_IMMU		! Install instruction TSB pointer
	membar	#Sync
	set	1f, %l1
	flush	%l1
1:
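The `andn`/`or` pairs above form the value written to the TSB registers: the low 13 bits of the TSB base (which encode size and split) are cleared and the size field is OR-ed in. As arithmetic:

```python
def tsb_reg(tsb_base, tsbsize):
    """Form the TSB register value as the andn/or pair above does:
    clear the low 13 size/split bits of the TSB base address and
    OR in the size field."""
    return (tsb_base & ~0x1FFF) | tsbsize
```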

	/* set trap table */
	set	_C_LABEL(trapbase), %l1
	call	_C_LABEL(prom_set_trap_table)	! Now we should be running 100% from our handlers
	mov	%l1, %o0
	wrpr	%l1, 0, %tba			! Make sure the PROM didn't foul up.

	/*
	 * Switch to the kernel mode and run away.
	 */
	wrpr	%g0, WSTATE_KERN, %wstate

#ifdef DEBUG
	wrpr	%g0, 1, %tl			! Debug -- start at tl==3 so we'll watchdog
	wrpr	%g0, 0x1ff, %tt			! Debug -- clear out unused trap regs
	wrpr	%g0, 0, %tpc
	wrpr	%g0, 0, %tnpc
	wrpr	%g0, 0, %tstate
	wrpr	%g0, 0, %tl
#endif

#ifdef DEBUG
	set	_C_LABEL(pmapdebug), %o1
	ld	[%o1], %o1
	sethi	%hi(0x40000), %o2
	btst	%o2, %o1
	bz	0f

	set	1f, %o0				! Debug printf
	call	_C_LABEL(prom_printf)
	.data
1:
	.asciz	"Calling startup routine...\r\n"
	_ALIGN
	.text
0:
#endif
	/*
	 * Call our startup routine.
	 */

	sethi	%hi(CPUINFO_VA+CI_SPINUP), %l0
	LDPTR	[%l0 + %lo(CPUINFO_VA+CI_SPINUP)], %o1

	call	%o1				! Call routine
	clr	%o0				! our frame arg is ignored

	set	1f, %o0				! Main should never come back here
	call	_C_LABEL(panic)
	nop
	.data
1:
	.asciz	"main() returned\r\n"
	_ALIGN
	.text

#if defined(MULTIPROCESSOR)
	/*
	 * cpu_mp_startup is called with:
	 *
	 *	%g2 = cpu_args
	 */
ENTRY(cpu_mp_startup)
	mov	1, %o0
	sllx	%o0, 63, %o0
	wr	%o0, TICK_CMPR	! XXXXXXX clear and disable %tick_cmpr for now
	wrpr	%g0, 0, %cleanwin
	wrpr	%g0, 0, %tl			! Make sure we're not in NUCLEUS mode
	wrpr	%g0, WSTATE_KERN, %wstate
	wrpr	%g0, PSTATE_KERN, %pstate
	flushw

	/*
	 * Get pointer to our cpu_info struct
	 */
	ldx	[%g2 + CBA_CPUINFO], %l1	! Load the interrupt stack's PA
	sethi	%hi(0xa0000000), %l2		! V=1|SZ=01|NFO=0|IE=0
	sllx	%l2, 32, %l2			! Shift it into place
	mov	-1, %l3				! Create a nice mask
	sllx	%l3, 41, %l4			! Mask off high bits
	or	%l4, 0xfff, %l4			! We can just load this in 12 (of 13) bits
	andn	%l1, %l4, %l1			! Mask the phys page number
	or	%l2, %l1, %l1			! Now take care of the high bits
	or	%l1, TTE_DATABITS, %l2		! And low bits:	L=1|CP=1|CV=?|E=0|P=1|W=1|G=0

	/*
	 * Now, map in the interrupt stack & cpu_info as context==0
	 */
	set	TLB_TAG_ACCESS, %l5
	set	INTSTACK, %l0
	stxa	%l0, [%l5] ASI_DMMU		! Make DMMU point to it
	stxa	%l2, [%g0] ASI_DMMU_DATA_IN	! Store it

	/*
	 * Set 0 as primary context XXX
	 */
	mov	CTX_PRIMARY, %o0
	stxa	%g0, [%o0] ASI_DMMU
	membar	#Sync

	/*
	 * Temporarily use the interrupt stack
	 */
#ifdef _LP64
	set	((EINTSTACK - CC64FSZ - TF_SIZE)) & ~0x0f - BIAS, %sp
#else
	set	EINTSTACK - CC64FSZ - TF_SIZE, %sp
#endif
	set	1, %fp
	clr	%i7

	/*
	 * install our TSB pointers
	 */
	sethi	%hi(CPUINFO_VA+CI_TSB_DMMU), %l0
	sethi	%hi(CPUINFO_VA+CI_TSB_IMMU), %l1
	sethi	%hi(_C_LABEL(tsbsize)), %l2
	sethi	%hi(0x1fff), %l3
	sethi	%hi(TSB), %l4
	LDPTR	[%l0 + %lo(CPUINFO_VA+CI_TSB_DMMU)], %l0
	LDPTR	[%l1 + %lo(CPUINFO_VA+CI_TSB_IMMU)], %l1
	ld	[%l2 + %lo(_C_LABEL(tsbsize))], %l2
	or	%l3, %lo(0x1fff), %l3
	or	%l4, %lo(TSB), %l4

	andn	%l0, %l3, %l0			! Mask off size and split bits
	or	%l0, %l2, %l0			! Make a TSB pointer
	stxa	%l0, [%l4] ASI_DMMU		! Install data TSB pointer
	membar	#Sync

	andn	%l1, %l3, %l1			! Mask off size and split bits
	or	%l1, %l2, %l1			! Make a TSB pointer
	stxa	%l1, [%l4] ASI_IMMU		! Install instruction TSB pointer
	membar	#Sync
	set	1f, %o0
	flush	%o0
1:

	/* set trap table */
	set	_C_LABEL(trapbase), %l1
	call	_C_LABEL(prom_set_trap_table)
	mov	%l1, %o0
	wrpr	%l1, 0, %tba			! Make sure the PROM didn't
						! foul up.
	/*
	 * Use this CPU's idlelwp's uarea stack
	 */
	sethi	%hi(CPUINFO_VA+CI_IDLELWP), %l0
	LDPTR	[%l0 + %lo(CPUINFO_VA+CI_IDLELWP)], %l0
	set	USPACE - TF_SIZE - CC64FSZ, %l1
	LDPTR	[%l0 + L_PCB], %l0
	add	%l0, %l1, %l0
#ifdef _LP64
	andn	%l0, 0x0f, %l0			! Needs to be 16-byte aligned
	sub	%l0, BIAS, %l0			! and biased
#endif
	mov	%l0, %sp
	flushw

	/*
	 * Switch to the kernel mode and run away.
	 */
	wrpr	%g0, 13, %pil
	wrpr	%g0, PSTATE_INTR|PSTATE_PEF, %pstate
	wr	%g0, FPRS_FEF, %fprs		! Turn on FPU

	call	_C_LABEL(cpu_hatch)
	clr	%g4

	b	_C_LABEL(idle_loop)
	clr	%o0

	NOTREACHED

	.globl	cpu_mp_startup_end
cpu_mp_startup_end:
#endif	/* MULTIPROCESSOR */

	.align	8
ENTRY(get_romtba)
	retl
	rdpr	%tba, %o0

/*
 * openfirmware(cell* param);
 *
 * OpenFirmware entry point
 *
 * If we're running in 32-bit mode we need to convert to a 64-bit stack
 * and 64-bit cells.  The cells we'll allocate off the stack for simplicity.
 */
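Both entry paths of `openfirmware()` raise %pil with a `cmp`/`movle` pair rather than an unconditional write, so a caller that is already above PIL_HIGH is never lowered while the PROM runs. As plain arithmetic (the PIL_HIGH value below is an assumption for illustration; the real constant is in the machine headers):

```python
PIL_HIGH = 13  # assumed value for illustration only

def prom_call_pil(cur_pil):
    """The cmp/movle pair keeps the higher of the current PIL and
    PIL_HIGH, so calling the PROM never lowers the interrupt level."""
    return cur_pil if cur_pil >= PIL_HIGH else PIL_HIGH
```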
	.align	8
ENTRY(openfirmware)
	sethi	%hi(romp), %o4
	andcc	%sp, 1, %g0
	bz,pt	%icc, 1f
	LDPTR	[%o4+%lo(romp)], %o4	! v9 stack, just load the addr and call it
	save	%sp, -CC64FSZ, %sp
	rdpr	%pil, %i2
	mov	PIL_HIGH, %i3
	cmp	%i3, %i2
	movle	%icc, %i2, %i3
	wrpr	%g0, %i3, %pil
	mov	%i0, %o0
	mov	%g1, %l1
	mov	%g2, %l2
	mov	%g3, %l3
	mov	%g4, %l4
	mov	%g5, %l5
	mov	%g6, %l6
	mov	%g7, %l7
	rdpr	%pstate, %l0
	jmpl	%i4, %o7
#if !defined(_LP64)
	wrpr	%g0, PSTATE_PROM, %pstate
#else
	wrpr	%g0, PSTATE_PROM|PSTATE_IE, %pstate
#endif
	wrpr	%l0, %g0, %pstate
	mov	%l1, %g1
	mov	%l2, %g2
	mov	%l3, %g3
	mov	%l4, %g4
	mov	%l5, %g5
	mov	%l6, %g6
	mov	%l7, %g7
	wrpr	%i2, 0, %pil
	ret
	restore	%o0, %g0, %o0

1:	! v8 -- need to screw with stack & params
#ifdef NOTDEF_DEBUG
	mov	%o7, %o5
	call	globreg_check
	nop
	mov	%o5, %o7
#endif
	save	%sp, -CC64FSZ, %sp		! Get a new 64-bit stack frame
	add	%sp, -BIAS, %sp
	rdpr	%pstate, %l0
	srl	%sp, 0, %sp
	rdpr	%pil, %i2			! s = splx(level)
	mov	%i0, %o0
	mov	PIL_HIGH, %i3
	mov	%g1, %l1
	mov	%g2, %l2
	cmp	%i3, %i2
	mov	%g3, %l3
	mov	%g4, %l4
	mov	%g5, %l5
	movle	%icc, %i2, %i3
	mov	%g6, %l6
	mov	%g7, %l7
	wrpr	%i3, %g0, %pil
	jmpl	%i4, %o7
	! Enable 64-bit addresses for the prom
#if defined(_LP64)
	wrpr	%g0, PSTATE_PROM, %pstate
#else
	wrpr	%g0, PSTATE_PROM|PSTATE_IE, %pstate
#endif
	wrpr	%l0, 0, %pstate
	wrpr	%i2, 0, %pil
	mov	%l1, %g1
	mov	%l2, %g2
	mov	%l3, %g3
	mov	%l4, %g4
	mov	%l5, %g5
	mov	%l6, %g6
	mov	%l7, %g7
	ret
	restore	%o0, %g0, %o0

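The v8 path above converts a 32-bit caller's stack for the 64-bit PROM: `save` carves a new v9-sized frame, the bias is applied, and `srl %sp, 0` zero-extends the result so the PROM sees a clean 32-bit address. A sketch of that pointer arithmetic — the CC64FSZ value below is hypothetical; the real frame-size constant is in the sparc64 asm headers:

```python
BIAS = 2047      # SPARC V9 stack bias
CC64FSZ = 192    # hypothetical v9 frame size, for illustration only

def v8_to_v9_sp(sp):
    """Sketch of the v8 path in openfirmware(): carve a 64-bit frame
    (save %sp, -CC64FSZ), apply the stack bias, then zero-extend
    (srl %sp, 0) so the PROM sees a clean 32-bit address."""
    return (sp - CC64FSZ - BIAS) & 0xFFFFFFFF
```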
/*
 * void ofw_exit(cell_t args[])
 */
ENTRY(openfirmware_exit)
	STACKFRAME(-CC64FSZ)
	flushw					! Flush register windows

	wrpr	%g0, PIL_HIGH, %pil		! Disable interrupts
	sethi	%hi(romtba), %l5
	LDPTR	[%l5 + %lo(romtba)], %l5
	wrpr	%l5, 0, %tba			! restore the ofw trap table

	/* Arrange locked kernel stack as PROM stack */
	set	EINTSTACK - CC64FSZ, %l5

	andn	%l5, 0x0f, %l5			! Needs to be 16-byte aligned
	sub	%l5, BIAS, %l5			! and biased
	mov	%l5, %sp
	flushw

	sethi	%hi(romp), %l6
	LDPTR	[%l6 + %lo(romp)], %l6

	mov	CTX_PRIMARY, %l3		! set context 0
	stxa	%g0, [%l3] ASI_DMMU
	membar	#Sync

	wrpr	%g0, PSTATE_PROM, %pstate	! Disable interrupts
						! and enable 64-bit addresses
	wrpr	%g0, 0, %tl			! force trap level 0
	call	%l6
	mov	%i0, %o0
	NOTREACHED

5343/* 5362/*
5344 * sp_tlb_flush_pte(vaddr_t va, int ctx) 5363 * sp_tlb_flush_pte(vaddr_t va, int ctx)
5345 * 5364 *
5346 * Flush tte from both IMMU and DMMU. 5365 * Flush tte from both IMMU and DMMU.
5347 * 5366 *
5348 * This uses %o0-%o5 5367 * This uses %o0-%o5
5349 */ 5368 */
5350 .align 8 5369 .align 8
5351ENTRY(sp_tlb_flush_pte) 5370ENTRY(sp_tlb_flush_pte)
5352#ifdef DEBUG 5371#ifdef DEBUG
5353 set DATA_START, %o4 ! Forget any recent TLB misses 5372 set DATA_START, %o4 ! Forget any recent TLB misses
5354 stx %g0, [%o4] 5373 stx %g0, [%o4]
5355 stx %g0, [%o4+16] 5374 stx %g0, [%o4+16]
5356#endif 5375#endif
#ifdef DEBUG
	set	pmapdebug, %o3
	lduw	[%o3], %o3
!	movrz	%o1, -1, %o3		! Print on either pmapdebug & PDB_DEMAP or ctx == 0
	btst	0x0020, %o3
	bz,pt	%icc, 2f
	 nop
	save	%sp, -CC64FSZ, %sp
	set	1f, %o0
	mov	%i1, %o1
	andn	%i0, 0xfff, %o3
	or	%o3, 0x010, %o3
	call	_C_LABEL(printf)
	 mov	%i0, %o2
	restore
	.data
1:
	.asciz	"sp_tlb_flush_pte: demap ctx=%x va=%08x res=%x\r\n"
	_ALIGN
	.text
2:
#endif
#ifdef SPITFIRE
#ifdef MULTIPROCESSOR
	rdpr	%pstate, %o3
	andn	%o3, PSTATE_IE, %o4		! disable interrupts
	wrpr	%o4, 0, %pstate
#endif
	srlx	%o0, PG_SHIFT4U, %o0		! drop unused va bits
	mov	CTX_SECONDARY, %o2
	sllx	%o0, PG_SHIFT4U, %o0
	ldxa	[%o2] ASI_DMMU, %o5		! Save secondary context
	sethi	%hi(KERNBASE), %o4
	membar	#LoadStore
	stxa	%o1, [%o2] ASI_DMMU		! Insert context to demap
	membar	#Sync
	or	%o0, DEMAP_PAGE_SECONDARY, %o0	! Demap page from secondary context only
	stxa	%o0, [%o0] ASI_DMMU_DEMAP	! Do the demap
	stxa	%o0, [%o0] ASI_IMMU_DEMAP	! to both TLBs
#ifdef _LP64
	srl	%o0, 0, %o0			! and make sure it's both 32- and 64-bit entries
	stxa	%o0, [%o0] ASI_DMMU_DEMAP	! Do the demap
	stxa	%o0, [%o0] ASI_IMMU_DEMAP	! Do the demap
#endif
	flush	%o4
	stxa	%o5, [%o2] ASI_DMMU		! Restore secondary context
	membar	#Sync
	retl
#ifdef MULTIPROCESSOR
	 wrpr	%o3, %pstate			! restore interrupts
#else
	 nop
#endif
#else
-#ifdef MULTIPROCESSOR
-	WRITEME
-#endif
+
+	! %o0 = VA [in]
+	! %o1 = ctx value [in] / KERNBASE
+	! %o2 = CTX_PRIMARY
+	! %o3 = saved %tl
+	! %o4 = saved %pstate
+	! %o5 = saved primary ctx
+
+	! Need this for UP as well
+	rdpr	%pstate, %o4
+	andn	%o4, PSTATE_IE, %o3		! disable interrupts
+	wrpr	%o3, 0, %pstate
+
	!!
	!! Cheetahs do not support flushing the IMMU from secondary context
	!!
	rdpr	%tl, %o3
	mov	CTX_PRIMARY, %o2
	brnz,pt	%o3, 1f
	 andn	%o0, 0xfff, %o0			! drop unused va bits
	wrpr	%g0, 1, %tl			! Make sure we're NUCLEUS
1:
	ldxa	[%o2] ASI_DMMU, %o5		! Save primary context
-	sethi	%hi(KERNBASE), %o4
	membar	#LoadStore
	stxa	%o1, [%o2] ASI_DMMU		! Insert context to demap
+	sethi	%hi(KERNBASE), %o1
	membar	#Sync
	or	%o0, DEMAP_PAGE_PRIMARY, %o0
	stxa	%o0, [%o0] ASI_DMMU_DEMAP	! Do the demap
	stxa	%o0, [%o0] ASI_IMMU_DEMAP	! to both TLBs
+#ifdef _LP64
	srl	%o0, 0, %o0			! and make sure it's both 32- and 64-bit entries
	stxa	%o0, [%o0] ASI_DMMU_DEMAP	! Do the demap
	stxa	%o0, [%o0] ASI_IMMU_DEMAP	! Do the demap
-	flush	%o4
+#endif
+	flush	%o1
	stxa	%o5, [%o2] ASI_DMMU		! Restore primary context
	brz,pt	%o3, 1f
-	 flush	%o4
+	 flush	%o1
	retl
	 nop
1:
+	wrpr	%o4, %pstate			! restore interrupts
	retl
	 wrpr	%g0, %o3, %tl			! Return to kernel mode.
#endif
5444 5476
5445 5477
5446/* 5478/*
5447 * sp_tlb_flush_all(void) 5479 * sp_tlb_flush_all(void)
5448 * 5480 *
5449 * Flush all user TLB entries from both IMMU and DMMU. 5481 * Flush all user TLB entries from both IMMU and DMMU.
5450 */ 5482 */
5451 .align 8 5483 .align 8
5452ENTRY(sp_tlb_flush_all) 5484ENTRY(sp_tlb_flush_all)
5453#ifdef SPITFIRE 5485#ifdef SPITFIRE
5454 rdpr %pstate, %o3 5486 rdpr %pstate, %o3
5455 andn %o3, PSTATE_IE, %o4 ! disable interrupts 5487 andn %o3, PSTATE_IE, %o4 ! disable interrupts
5456 wrpr %o4, 0, %pstate 5488 wrpr %o4, 0, %pstate
5457 set (63 * 8), %o0 ! last TLB entry 5489 set (63 * 8), %o0 ! last TLB entry
5458 set CTX_SECONDARY, %o4 5490 set CTX_SECONDARY, %o4
5459 ldxa [%o4] ASI_DMMU, %o4 ! save secondary context 5491 ldxa [%o4] ASI_DMMU, %o4 ! save secondary context
5460 set CTX_MASK, %o5 5492 set CTX_MASK, %o5
5461 membar #Sync 5493 membar #Sync
5462 5494
5463 ! %o0 = loop counter 5495 ! %o0 = loop counter
5464 ! %o1 = ctx value 5496 ! %o1 = ctx value
5465 ! %o2 = TLB tag value 5497 ! %o2 = TLB tag value
5466 ! %o3 = saved %pstate 5498 ! %o3 = saved %pstate
5467 ! %o4 = saved primary ctx 5499 ! %o4 = saved primary ctx
5468 ! %o5 = CTX_MASK 5500 ! %o5 = CTX_MASK
5469 ! %xx = saved %tl 5501 ! %xx = saved %tl
5470 5502
54710: 55030:
5472 ldxa [%o0] ASI_DMMU_TLB_TAG, %o2 ! fetch the TLB tag 5504 ldxa [%o0] ASI_DMMU_TLB_TAG, %o2 ! fetch the TLB tag
5473 andcc %o2, %o5, %o1 ! context 0? 5505 andcc %o2, %o5, %o1 ! context 0?
5474 bz,pt %xcc, 1f ! if so, skip 5506 bz,pt %xcc, 1f ! if so, skip
5475 mov CTX_SECONDARY, %o2 5507 mov CTX_SECONDARY, %o2
5476 5508
5477 stxa %o1, [%o2] ASI_DMMU ! set the context 5509 stxa %o1, [%o2] ASI_DMMU ! set the context
5478 set DEMAP_CTX_SECONDARY, %o2 5510 set DEMAP_CTX_SECONDARY, %o2
5479 membar #Sync 5511 membar #Sync
5480 stxa %o2, [%o2] ASI_DMMU_DEMAP ! do the demap 5512 stxa %o2, [%o2] ASI_DMMU_DEMAP ! do the demap
5481 membar #Sync 5513 membar #Sync
5482 5514
54831: 55151:
5484 dec 8, %o0 5516 dec 8, %o0
5485 brgz,pt %o0, 0b ! loop over all entries 5517 brgz,pt %o0, 0b ! loop over all entries
5486 nop 5518 nop
5487 5519
5488/* 5520/*
5489 * now do the IMMU 5521 * now do the IMMU
5490 */ 5522 */
5491 5523
5492 set (63 * 8), %o0 ! last TLB entry 5524 set (63 * 8), %o0 ! last TLB entry
5493 5525
54940: 55260:
5495 ldxa [%o0] ASI_IMMU_TLB_TAG, %o2 ! fetch the TLB tag 5527 ldxa [%o0] ASI_IMMU_TLB_TAG, %o2 ! fetch the TLB tag
5496 andcc %o2, %o5, %o1 ! context 0? 5528 andcc %o2, %o5, %o1 ! context 0?
5497 bz,pt %xcc, 1f ! if so, skip 5529 bz,pt %xcc, 1f ! if so, skip
5498 mov CTX_SECONDARY, %o2 5530 mov CTX_SECONDARY, %o2
5499 5531
5500 stxa %o1, [%o2] ASI_DMMU ! set the context 5532 stxa %o1, [%o2] ASI_DMMU ! set the context
5501 set DEMAP_CTX_SECONDARY, %o2 5533 set DEMAP_CTX_SECONDARY, %o2
5502 membar #Sync 5534 membar #Sync
5503 stxa %o2, [%o2] ASI_IMMU_DEMAP ! do the demap 5535 stxa %o2, [%o2] ASI_IMMU_DEMAP ! do the demap
5504 membar #Sync 5536 membar #Sync
5505 5537
55061: 55381:
5507 dec 8, %o0 5539 dec 8, %o0
5508 brgz,pt %o0, 0b ! loop over all entries 5540 brgz,pt %o0, 0b ! loop over all entries
5509 nop 5541 nop
5510 5542
5511 set CTX_SECONDARY, %o2 5543 set CTX_SECONDARY, %o2
5512 stxa %o4, [%o2] ASI_DMMU ! restore secondary ctx 5544 stxa %o4, [%o2] ASI_DMMU ! restore secondary ctx
5513 sethi %hi(KERNBASE), %o4 5545 sethi %hi(KERNBASE), %o4
5514 membar #Sync 5546 membar #Sync
5515 flush %o4 5547 flush %o4
5516 retl 5548 retl
5517 wrpr %o3, %pstate 5549 wrpr %o3, %pstate
5518#else 5550#else
5519 ! XXX bump up %tl around this call always 5551 ! XXX bump up %tl around this call always
5520 rdpr %tl, %o4 5552 rdpr %tl, %o4
5521 inc %o4 5553 inc %o4
5522 wrpr %o4, 0, %tl 5554 wrpr %o4, 0, %tl
5523 5555
5524 rdpr %pstate, %o3 5556 rdpr %pstate, %o3
5525 andn %o3, PSTATE_IE, %o4 ! disable interrupts 5557 andn %o3, PSTATE_IE, %o4 ! disable interrupts
5526 wrpr %o4, 0, %pstate 5558 wrpr %o4, 0, %pstate
5527 set (63 * 8), %o0 ! last TLB entry 5559 set (63 * 8), %o0 ! last TLB entry
5528 set CTX_PRIMARY, %o4 5560 set CTX_PRIMARY, %o4
5529 ldxa [%o4] ASI_DMMU, %o4 ! save secondary context 5561 ldxa [%o4] ASI_DMMU, %o4 ! save secondary context
5530 set CTX_MASK, %o5 5562 set CTX_MASK, %o5
5531 membar #Sync 5563 membar #Sync
5532 5564
5533 ! %o0 = loop counter 5565 ! %o0 = loop counter
5534 ! %o1 = ctx value 5566 ! %o1 = ctx value
5535 ! %o2 = TLB tag value 5567 ! %o2 = TLB tag value
5536 ! %o3 = saved %pstate 5568 ! %o3 = saved %pstate
5537 ! %o4 = saved primary ctx 5569 ! %o4 = saved primary ctx
5538 ! %o5 = CTX_MASK 5570 ! %o5 = CTX_MASK
5539 ! %xx = saved %tl 5571 ! %xx = saved %tl
5540 5572
55410: 55730:
5542 ldxa [%o0] ASI_DMMU_TLB_TAG, %o2 ! fetch the TLB tag 5574 ldxa [%o0] ASI_DMMU_TLB_TAG, %o2 ! fetch the TLB tag
5543 andcc %o2, %o5, %o1 ! context 0? 5575 andcc %o2, %o5, %o1 ! context 0?
5544 bz,pt %xcc, 1f ! if so, skip 5576 bz,pt %xcc, 1f ! if so, skip
5545 mov CTX_PRIMARY, %o2 5577 mov CTX_PRIMARY, %o2
5546 5578
5547 stxa %o1, [%o2] ASI_DMMU ! set the context 5579 stxa %o1, [%o2] ASI_DMMU ! set the context
5548 set DEMAP_CTX_PRIMARY, %o2 5580 set DEMAP_CTX_PRIMARY, %o2
5549 membar #Sync 5581 membar #Sync
5550 stxa %o2, [%o2] ASI_DMMU_DEMAP ! do the demap 5582 stxa %o2, [%o2] ASI_DMMU_DEMAP ! do the demap
5551 membar #Sync 5583 membar #Sync
5552 5584
55531: 55851:
5554 dec 8, %o0 5586 dec 8, %o0
5555 brgz,pt %o0, 0b ! loop over all entries 5587 brgz,pt %o0, 0b ! loop over all entries
5556 nop 5588 nop
5557 5589
5558/* 5590/*
5559 * now do the IMMU 5591 * now do the IMMU
5560 */ 5592 */
5561 5593
5562 set (63 * 8), %o0 ! last TLB entry 5594 set (63 * 8), %o0 ! last TLB entry
5563 5595
55640: 55960:
5565 ldxa [%o0] ASI_IMMU_TLB_TAG, %o2 ! fetch the TLB tag 5597 ldxa [%o0] ASI_IMMU_TLB_TAG, %o2 ! fetch the TLB tag
5566 andcc %o2, %o5, %o1 ! context 0? 5598 andcc %o2, %o5, %o1 ! context 0?
5567 bz,pt %xcc, 1f ! if so, skip 5599 bz,pt %xcc, 1f ! if so, skip
5568 mov CTX_PRIMARY, %o2 5600 mov CTX_PRIMARY, %o2
5569 5601
5570 stxa %o1, [%o2] ASI_DMMU ! set the context 5602 stxa %o1, [%o2] ASI_DMMU ! set the context
5571 set DEMAP_CTX_PRIMARY, %o2 5603 set DEMAP_CTX_PRIMARY, %o2
5572 membar #Sync 5604 membar #Sync
5573 stxa %o2, [%o2] ASI_IMMU_DEMAP ! do the demap 5605 stxa %o2, [%o2] ASI_IMMU_DEMAP ! do the demap
5574 membar #Sync 5606 membar #Sync
5575 5607
55761: 56081:
5577 dec 8, %o0 5609 dec 8, %o0
5578 brgz,pt %o0, 0b ! loop over all entries 5610 brgz,pt %o0, 0b ! loop over all entries
5579 nop 5611 nop
5580 5612
5581 set CTX_PRIMARY, %o2 5613 set CTX_PRIMARY, %o2
5582 stxa %o4, [%o2] ASI_DMMU ! restore secondary ctx 5614 stxa %o4, [%o2] ASI_DMMU ! restore secondary ctx
5583 sethi %hi(KERNBASE), %o4 5615 sethi %hi(KERNBASE), %o4
5584 membar #Sync 5616 membar #Sync
5585 flush %o4 5617 flush %o4
5586 5618
5587 ! XXX bump up %tl around this call always 5619 ! XXX bump up %tl around this call always
5588 rdpr %tl, %o4 5620 rdpr %tl, %o4
5589 dec %o4 5621 dec %o4
5590 wrpr %o4, 0, %tl 5622 wrpr %o4, 0, %tl
5591 5623
5592 retl 5624 retl
5593 wrpr %o3, %pstate 5625 wrpr %o3, %pstate
5594 5626
5595#endif 5627#endif
5596 5628
5597/* 5629/*
5598 * blast_dcache() 5630 * blast_dcache()
5599 * 5631 *
5600 * Clear out all of D$ regardless of contents 5632 * Clear out all of D$ regardless of contents
5601 * Does not modify %o0 5633 * Does not modify %o0
5602 * 5634 *
5603 */ 5635 */
5604 .align 8 5636 .align 8
5605ENTRY(blast_dcache) 5637ENTRY(blast_dcache)
5606/* 5638/*
5607 * We turn off interrupts for the duration to prevent RED exceptions. 5639 * We turn off interrupts for the duration to prevent RED exceptions.
5608 */ 5640 */
5609#ifdef PROF 5641#ifdef PROF
5610 save %sp, -CC64FSZ, %sp 5642 save %sp, -CC64FSZ, %sp
5611#endif 5643#endif
5612 5644
5613 rdpr %pstate, %o3 5645 rdpr %pstate, %o3
5614 set (2 * NBPG) - 32, %o1 5646 set (2 * NBPG) - 32, %o1
5615 andn %o3, PSTATE_IE, %o4 ! Turn off PSTATE_IE bit 5647 andn %o3, PSTATE_IE, %o4 ! Turn off PSTATE_IE bit
5616 wrpr %o4, 0, %pstate 5648 wrpr %o4, 0, %pstate
56171: 56491:
5618#ifdef SPITFIRE 5650#ifdef SPITFIRE
5619 stxa %g0, [%o1] ASI_DCACHE_TAG 5651 stxa %g0, [%o1] ASI_DCACHE_TAG
5620#else 5652#else
5621 stxa %g0, [%o1] ASI_DCACHE_INVALIDATE 5653 stxa %g0, [%o1] ASI_DCACHE_INVALIDATE
5622#endif 5654#endif
5623 brnz,pt %o1, 1b 5655 brnz,pt %o1, 1b
5624 dec 32, %o1 5656 dec 32, %o1
5625 sethi %hi(KERNBASE), %o2 5657 sethi %hi(KERNBASE), %o2
5626 flush %o2 5658 flush %o2
5627 membar #Sync 5659 membar #Sync
5628#ifdef PROF 5660#ifdef PROF
5629 wrpr %o3, %pstate 5661 wrpr %o3, %pstate
5630 ret 5662 ret
5631 restore 5663 restore
5632#else 5664#else
5633 retl 5665 retl
5634 wrpr %o3, %pstate 5666 wrpr %o3, %pstate
5635#endif 5667#endif
5636 5668
5637/* 5669/*
5638 * blast_icache() 5670 * blast_icache()
5639 * 5671 *
5640 * Clear out all of I$ regardless of contents 5672 * Clear out all of I$ regardless of contents
5641 * Does not modify %o0 5673 * Does not modify %o0
5642 * 5674 *
5643 */ 5675 */
5644 .align 8 5676 .align 8
5645ENTRY(blast_icache) 5677ENTRY(blast_icache)
5646/* 5678/*
5647 * We turn off interrupts for the duration to prevent RED exceptions. 5679 * We turn off interrupts for the duration to prevent RED exceptions.
5648 */ 5680 */
5649 rdpr %pstate, %o3 5681 rdpr %pstate, %o3
5650 set (2 * NBPG) - 32, %o1 5682 set (2 * NBPG) - 32, %o1
5651 andn %o3, PSTATE_IE, %o4 ! Turn off PSTATE_IE bit 5683 andn %o3, PSTATE_IE, %o4 ! Turn off PSTATE_IE bit
5652 wrpr %o4, 0, %pstate 5684 wrpr %o4, 0, %pstate
56531: 56851:
5654 stxa %g0, [%o1] ASI_ICACHE_TAG 5686 stxa %g0, [%o1] ASI_ICACHE_TAG
5655 brnz,pt %o1, 1b 5687 brnz,pt %o1, 1b
5656 dec 32, %o1 5688 dec 32, %o1
5657 sethi %hi(KERNBASE), %o2 5689 sethi %hi(KERNBASE), %o2
5658 flush %o2 5690 flush %o2
5659 membar #Sync 5691 membar #Sync
5660 retl 5692 retl
5661 wrpr %o3, %pstate 5693 wrpr %o3, %pstate
5662 5694
5663/* 5695/*
5664 * dcache_flush_page(paddr_t pa) 5696 * dcache_flush_page(paddr_t pa)
5665 * 5697 *
5666 * Clear one page from D$. 5698 * Clear one page from D$.
5667 * 5699 *
5668 */ 5700 */
5669 .align 8 5701 .align 8
5670ENTRY(dcache_flush_page) 5702ENTRY(dcache_flush_page)
5671#ifndef _LP64 5703#ifndef _LP64
5672 COMBINE(%o0, %o1, %o0) 5704 COMBINE(%o0, %o1, %o0)
5673#endif 5705#endif
5674 mov -1, %o1 ! Generate mask for tag: bits [29..2] 5706 mov -1, %o1 ! Generate mask for tag: bits [29..2]
5675 srlx %o0, 13-2, %o2 ! Tag is PA bits <40:13> in bits <29:2> 5707 srlx %o0, 13-2, %o2 ! Tag is PA bits <40:13> in bits <29:2>
5676 clr %o4 5708 clr %o4
5677 srl %o1, 2, %o1 ! Now we have bits <29:0> set 5709 srl %o1, 2, %o1 ! Now we have bits <29:0> set
5678 set (2*NBPG), %o5 5710 set (2*NBPG), %o5
5679 ba,pt %icc, 1f 5711 ba,pt %icc, 1f
5680 andn %o1, 3, %o1 ! Now we have bits <29:2> set 5712 andn %o1, 3, %o1 ! Now we have bits <29:2> set
5681 5713
5682 .align 8 5714 .align 8
56831: 57151:
5684 ldxa [%o4] ASI_DCACHE_TAG, %o3 5716 ldxa [%o4] ASI_DCACHE_TAG, %o3
5685 mov %o4, %o0 5717 mov %o4, %o0
5686 deccc 32, %o5 5718 deccc 32, %o5
5687 bl,pn %icc, 2f 5719 bl,pn %icc, 2f
5688 inc 32, %o4 5720 inc 32, %o4
5689 5721
5690 xor %o3, %o2, %o3 5722 xor %o3, %o2, %o3
5691 andcc %o3, %o1, %g0 5723 andcc %o3, %o1, %g0
5692 bne,pt %xcc, 1b 5724 bne,pt %xcc, 1b
5693 membar #LoadStore 5725 membar #LoadStore
5694 5726
5695#ifdef SPITFIRE 5727#ifdef SPITFIRE
5696 stxa %g0, [%o0] ASI_DCACHE_TAG 5728 stxa %g0, [%o0] ASI_DCACHE_TAG
5697#else 5729#else
5698 stxa %g0, [%o0] ASI_DCACHE_INVALIDATE 5730 stxa %g0, [%o0] ASI_DCACHE_INVALIDATE
5699#endif 5731#endif
5700 ba,pt %icc, 1b 5732 ba,pt %icc, 1b
5701 membar #StoreLoad 5733 membar #StoreLoad
57022: 57342:
5703 5735
5704 sethi %hi(KERNBASE), %o5 5736 sethi %hi(KERNBASE), %o5
5705 flush %o5 5737 flush %o5
5706 retl 5738 retl
5707 membar #Sync 5739 membar #Sync
5708 5740
5709/* 5741/*
5710 * icache_flush_page(paddr_t pa) 5742 * icache_flush_page(paddr_t pa)
5711 * 5743 *
5712 * Clear one page from I$. 5744 * Clear one page from I$.
5713 * 5745 *
5714 */ 5746 */
5715 .align 8 5747 .align 8
5716ENTRY(icache_flush_page) 5748ENTRY(icache_flush_page)
5717#ifndef _LP64 5749#ifndef _LP64
5718 COMBINE(%o0, %o1, %o0) 5750 COMBINE(%o0, %o1, %o0)
5719#endif 5751#endif
5720 5752
5721#ifdef SPITFIRE 5753#ifdef SPITFIRE
5722 !! 5754 !!
5723 !! Linux sez that I$ flushes are not needed for cheetah. 5755 !! Linux sez that I$ flushes are not needed for cheetah.
5724 !! 5756 !!
5725  5757
5726 !! Now do the I$ 5758 !! Now do the I$
5727 srlx %o0, 13-8, %o2 5759 srlx %o0, 13-8, %o2
5728 mov -1, %o1 ! Generate mask for tag: bits [35..8] 5760 mov -1, %o1 ! Generate mask for tag: bits [35..8]
5729 srl %o1, 32-35+7, %o1 5761 srl %o1, 32-35+7, %o1
5730 clr %o4 5762 clr %o4
5731 sll %o1, 7, %o1 ! Mask 5763 sll %o1, 7, %o1 ! Mask
5732 set (2*NBPG), %o5 5764 set (2*NBPG), %o5
5733  5765
57341: 57661:
5735 ldda [%o4] ASI_ICACHE_TAG, %g0 ! Tag goes in %g1 5767 ldda [%o4] ASI_ICACHE_TAG, %g0 ! Tag goes in %g1
5736 dec 32, %o5 5768 dec 32, %o5
5737 xor %g1, %o2, %g1 5769 xor %g1, %o2, %g1
5738 andcc %g1, %o1, %g0 5770 andcc %g1, %o1, %g0
5739 bne,pt %xcc, 2f 5771 bne,pt %xcc, 2f
5740 membar #LoadStore 5772 membar #LoadStore
5741 stxa %g0, [%o4] ASI_ICACHE_TAG 5773 stxa %g0, [%o4] ASI_ICACHE_TAG
5742 membar #StoreLoad 5774 membar #StoreLoad
57432: 57752:
5744 brnz,pt %o5, 1b 5776 brnz,pt %o5, 1b
5745 inc 32, %o4 5777 inc 32, %o4
5746#endif 5778#endif
5747 sethi %hi(KERNBASE), %o5 5779 sethi %hi(KERNBASE), %o5
5748 flush %o5 5780 flush %o5
5749 membar #Sync 5781 membar #Sync
5750 retl 5782 retl
5751 nop 5783 nop
5752 5784
5753/* 5785/*
5754 * cache_flush_phys(paddr_t, psize_t, int); 5786 * cache_flush_phys(paddr_t, psize_t, int);
5755 * 5787 *
5756 * Clear a set of paddrs from the D$, I$ and if param3 is 5788 * Clear a set of paddrs from the D$, I$ and if param3 is
5757 * non-zero, E$. (E$ is not supported yet). 5789 * non-zero, E$. (E$ is not supported yet).
5758 */ 5790 */
5759 5791
5760 .align 8 5792 .align 8
5761ENTRY(cache_flush_phys) 5793ENTRY(cache_flush_phys)
5762#ifndef _LP64 5794#ifndef _LP64
5763 COMBINE(%o0, %o1, %o0) 5795 COMBINE(%o0, %o1, %o0)
5764 COMBINE(%o2, %o3, %o1) 5796 COMBINE(%o2, %o3, %o1)
5765 mov %o4, %o2 5797 mov %o4, %o2
5766#endif 5798#endif
5767#ifdef DEBUG 5799#ifdef DEBUG
5768 tst %o2 ! Want to clear E$? 5800 tst %o2 ! Want to clear E$?
5769 tnz 1 ! Error! 5801 tnz 1 ! Error!
5770#endif 5802#endif
5771 add %o0, %o1, %o1 ! End PA 5803 add %o0, %o1, %o1 ! End PA
5772 dec %o1 5804 dec %o1
5773 5805
5774 !! 5806 !!
5775 !! Both D$ and I$ tags match pa bits 40-13, but 5807 !! Both D$ and I$ tags match pa bits 40-13, but
5776 !! they are shifted different amounts. So we'll 5808 !! they are shifted different amounts. So we'll
5777 !! generate a mask for bits 40-13. 5809 !! generate a mask for bits 40-13.
5778 !! 5810 !!
5779 5811
5780 mov -1, %o2 ! Generate mask for tag: bits [40..13] 5812 mov -1, %o2 ! Generate mask for tag: bits [40..13]
5781 srl %o2, 5, %o2 ! 32-5 = [27..0] 5813 srl %o2, 5, %o2 ! 32-5 = [27..0]
5782 sllx %o2, 13, %o2 ! 27+13 = [40..13] 5814 sllx %o2, 13, %o2 ! 27+13 = [40..13]
5783 5815
5784 and %o2, %o0, %o0 ! Mask away uninteresting bits 5816 and %o2, %o0, %o0 ! Mask away uninteresting bits
5785 and %o2, %o1, %o1 ! (probably not necessary) 5817 and %o2, %o1, %o1 ! (probably not necessary)
5786 5818
5787 set (2*NBPG), %o5 5819 set (2*NBPG), %o5
5788 clr %o4 5820 clr %o4
57891: 58211:
5790 ldxa [%o4] ASI_DCACHE_TAG, %o3 5822 ldxa [%o4] ASI_DCACHE_TAG, %o3
5791#ifdef SPITFIRE 5823#ifdef SPITFIRE
5792 ldda [%o4] ASI_ICACHE_TAG, %g0 ! Tag goes in %g1 -- not on cheetah 5824 ldda [%o4] ASI_ICACHE_TAG, %g0 ! Tag goes in %g1 -- not on cheetah
5793#endif 5825#endif
5794 sllx %o3, 40-29, %o3 ! Shift D$ tag into place 5826 sllx %o3, 40-29, %o3 ! Shift D$ tag into place
5795 and %o3, %o2, %o3 ! Mask out trash 5827 and %o3, %o2, %o3 ! Mask out trash
5796 cmp %o0, %o3 5828 cmp %o0, %o3
5797 blt,pt %xcc, 2f ! Too low 5829 blt,pt %xcc, 2f ! Too low
5798#ifdef SPITFIRE 5830#ifdef SPITFIRE
5799 sllx %g1, 40-35, %g1 ! Shift I$ tag into place 5831 sllx %g1, 40-35, %g1 ! Shift I$ tag into place
5800#endif 5832#endif
5801 cmp %o1, %o3 5833 cmp %o1, %o3
5802 bgt,pt %xcc, 2f ! Too high 5834 bgt,pt %xcc, 2f ! Too high
5803 nop 5835 nop
5804 5836
5805 membar #LoadStore 5837 membar #LoadStore
5806#ifdef SPITFIRE 5838#ifdef SPITFIRE
5807 stxa %g0, [%o4] ASI_DCACHE_TAG ! Just right 5839 stxa %g0, [%o4] ASI_DCACHE_TAG ! Just right
5808#else 5840#else
5809 stxa %g0, [%o4] ASI_DCACHE_INVALIDATE ! Just right 5841 stxa %g0, [%o4] ASI_DCACHE_INVALIDATE ! Just right
5810#endif 5842#endif
58112: 58432:
5812#ifdef SPITFIRE 5844#ifdef SPITFIRE
5813 and %g1, %o2, %g1 ! Mask out trash 5845 and %g1, %o2, %g1 ! Mask out trash
5814 cmp %o0, %g1 5846 cmp %o0, %g1
5815 blt,pt %xcc, 3f 5847 blt,pt %xcc, 3f
5816 cmp %o1, %g1 5848 cmp %o1, %g1
5817 bgt,pt %xcc, 3f 5849 bgt,pt %xcc, 3f
5818 nop 5850 nop
5819 stxa %g0, [%o4] ASI_ICACHE_TAG 5851 stxa %g0, [%o4] ASI_ICACHE_TAG
58203: 58523:
5821#endif 5853#endif
5822 membar #StoreLoad 5854 membar #StoreLoad
5823 dec 32, %o5 5855 dec 32, %o5
5824 brgz,pt %o5, 1b 5856 brgz,pt %o5, 1b
5825 inc 32, %o4 5857 inc 32, %o4
5826 5858
5827 sethi %hi(KERNBASE), %o5 5859 sethi %hi(KERNBASE), %o5
5828 flush %o5 5860 flush %o5
5829 membar #Sync 5861 membar #Sync
5830 retl 5862 retl
5831 nop 5863 nop
5832 5864
5833#ifdef COMPAT_16 5865#ifdef COMPAT_16
5834#ifdef _LP64 5866#ifdef _LP64
5835/* 5867/*
5836 * XXXXX Still needs lotsa cleanup after sendsig is complete and offsets are known 5868 * XXXXX Still needs lotsa cleanup after sendsig is complete and offsets are known
5837 * 5869 *
5838 * The following code is copied to the top of the user stack when each 5870 * The following code is copied to the top of the user stack when each
5839 * process is exec'ed, and signals are `trampolined' off it. 5871 * process is exec'ed, and signals are `trampolined' off it.
5840 * 5872 *
5841 * When this code is run, the stack looks like: 5873 * When this code is run, the stack looks like:
5842 * [%sp] 128 bytes to which registers can be dumped 5874 * [%sp] 128 bytes to which registers can be dumped
5843 * [%sp + 128] signal number (goes in %o0) 5875 * [%sp + 128] signal number (goes in %o0)
5844 * [%sp + 128 + 4] signal code (goes in %o1) 5876 * [%sp + 128 + 4] signal code (goes in %o1)
5845 * [%sp + 128 + 8] first word of saved state (sigcontext) 5877 * [%sp + 128 + 8] first word of saved state (sigcontext)
5846 * . 5878 * .
5847 * . 5879 * .
5848 * . 5880 * .
5849 * [%sp + NNN] last word of saved state 5881 * [%sp + NNN] last word of saved state
5850 * (followed by previous stack contents or top of signal stack). 5882 * (followed by previous stack contents or top of signal stack).
5851 * The address of the function to call is in %g1; the old %g1 and %o0 5883 * The address of the function to call is in %g1; the old %g1 and %o0
5852 * have already been saved in the sigcontext. We are running in a clean 5884 * have already been saved in the sigcontext. We are running in a clean
5853 * window, all previous windows now being saved to the stack. 5885 * window, all previous windows now being saved to the stack.
5854 * 5886 *
5855 * Note that [%sp + 128 + 8] == %sp + 128 + 16. The copy at %sp+128+8 5887 * Note that [%sp + 128 + 8] == %sp + 128 + 16. The copy at %sp+128+8
5856 * will eventually be removed, with a hole left in its place, if things 5888 * will eventually be removed, with a hole left in its place, if things
5857 * work out. 5889 * work out.
5858 */ 5890 */
5859ENTRY_NOPROFILE(sigcode) 5891ENTRY_NOPROFILE(sigcode)
5860 /* 5892 /*
5861 * XXX the `save' and `restore' below are unnecessary: should 5893 * XXX the `save' and `restore' below are unnecessary: should
5862 * replace with simple arithmetic on %sp 5894 * replace with simple arithmetic on %sp
5863 * 5895 *
5864 * Make room on the stack for 64 %f registers + %fsr. This comes 5896 * Make room on the stack for 64 %f registers + %fsr. This comes
5865 * out to 64*4+8 or 264 bytes, but this must be aligned to a multiple 5897 * out to 64*4+8 or 264 bytes, but this must be aligned to a multiple
5866 * of 64, or 320 bytes. 5898 * of 64, or 320 bytes.
5867 */ 5899 */
5868 save %sp, -CC64FSZ - 320, %sp 5900 save %sp, -CC64FSZ - 320, %sp
5869 mov %g2, %l2 ! save globals in %l registers 5901 mov %g2, %l2 ! save globals in %l registers
5870 mov %g3, %l3 5902 mov %g3, %l3
5871 mov %g4, %l4 5903 mov %g4, %l4
5872 mov %g5, %l5 5904 mov %g5, %l5
5873 mov %g6, %l6 5905 mov %g6, %l6
5874 mov %g7, %l7 5906 mov %g7, %l7
5875 /* 5907 /*
5876 * Saving the fpu registers is expensive, so do it iff it is 5908 * Saving the fpu registers is expensive, so do it iff it is
5877 * enabled and dirty. 5909 * enabled and dirty.
5878 */ 5910 */
5879 rd %fprs, %l0 5911 rd %fprs, %l0
5880 btst FPRS_DL|FPRS_DU, %l0 ! All clean? 5912 btst FPRS_DL|FPRS_DU, %l0 ! All clean?
5881 bz,pt %icc, 2f 5913 bz,pt %icc, 2f
5882 btst FPRS_DL, %l0 ! test dl 5914 btst FPRS_DL, %l0 ! test dl
5883 bz,pt %icc, 1f 5915 bz,pt %icc, 1f
5884 btst FPRS_DU, %l0 ! test du 5916 btst FPRS_DU, %l0 ! test du
5885 5917
5886 ! fpu is enabled, oh well 5918 ! fpu is enabled, oh well
5887 stx %fsr, [%sp + CC64FSZ + BIAS + 0] 5919 stx %fsr, [%sp + CC64FSZ + BIAS + 0]
5888 add %sp, BIAS+CC64FSZ+BLOCK_SIZE, %l0 ! Generate a pointer so we can 5920 add %sp, BIAS+CC64FSZ+BLOCK_SIZE, %l0 ! Generate a pointer so we can
5889 andn %l0, BLOCK_ALIGN, %l0 ! do a block store 5921 andn %l0, BLOCK_ALIGN, %l0 ! do a block store
5890 stda %f0, [%l0] ASI_BLK_P 5922 stda %f0, [%l0] ASI_BLK_P
5891 inc BLOCK_SIZE, %l0 5923 inc BLOCK_SIZE, %l0
5892 stda %f16, [%l0] ASI_BLK_P 5924 stda %f16, [%l0] ASI_BLK_P
58931: 59251:
5894 bz,pt %icc, 2f 5926 bz,pt %icc, 2f
5895 add %sp, BIAS+CC64FSZ+BLOCK_SIZE, %l0 ! Generate a pointer so we can 5927 add %sp, BIAS+CC64FSZ+BLOCK_SIZE, %l0 ! Generate a pointer so we can
5896 andn %l0, BLOCK_ALIGN, %l0 ! do a block store 5928 andn %l0, BLOCK_ALIGN, %l0 ! do a block store
5897 add %l0, 2*BLOCK_SIZE, %l0 ! and skip what we already stored 5929 add %l0, 2*BLOCK_SIZE, %l0 ! and skip what we already stored
5898 stda %f32, [%l0] ASI_BLK_P 5930 stda %f32, [%l0] ASI_BLK_P
5899 inc BLOCK_SIZE, %l0 5931 inc BLOCK_SIZE, %l0
5900 stda %f48, [%l0] ASI_BLK_P 5932 stda %f48, [%l0] ASI_BLK_P
59012: 59332:
5902 membar #Sync 5934 membar #Sync
5903 rd %fprs, %l0 ! reload fprs copy, for checking after 5935 rd %fprs, %l0 ! reload fprs copy, for checking after
5904 rd %y, %l1 ! in any case, save %y 5936 rd %y, %l1 ! in any case, save %y
5905 lduw [%fp + BIAS + 128], %o0 ! sig 5937 lduw [%fp + BIAS + 128], %o0 ! sig
5906 lduw [%fp + BIAS + 128 + 4], %o1 ! code 5938 lduw [%fp + BIAS + 128 + 4], %o1 ! code
5907 call %g1 ! (*sa->sa_handler)(sig,code,scp) 5939 call %g1 ! (*sa->sa_handler)(sig,code,scp)
5908 add %fp, BIAS + 128 + 8, %o2 ! scp 5940 add %fp, BIAS + 128 + 8, %o2 ! scp
5909 wr %l1, %g0, %y ! in any case, restore %y 5941 wr %l1, %g0, %y ! in any case, restore %y
5910 5942
5911 /* 5943 /*
5912 * Now that the handler has returned, re-establish all the state 5944 * Now that the handler has returned, re-establish all the state
5913 * we just saved above, then do a sigreturn. 5945 * we just saved above, then do a sigreturn.
5914 */ 5946 */
5915 btst FPRS_DL|FPRS_DU, %l0 ! All clean? 5947 btst FPRS_DL|FPRS_DU, %l0 ! All clean?
5916 bz,pt %icc, 2f 5948 bz,pt %icc, 2f
5917 btst FPRS_DL, %l0 ! test dl 5949 btst FPRS_DL, %l0 ! test dl
5918 bz,pt %icc, 1f 5950 bz,pt %icc, 1f
5919 btst FPRS_DU, %l0 ! test du 5951 btst FPRS_DU, %l0 ! test du
5920 5952
5921 ldx [%sp + CC64FSZ + BIAS + 0], %fsr 5953 ldx [%sp + CC64FSZ + BIAS + 0], %fsr
5922 add %sp, BIAS+CC64FSZ+BLOCK_SIZE, %l0 ! Generate a pointer so we can 5954 add %sp, BIAS+CC64FSZ+BLOCK_SIZE, %l0 ! Generate a pointer so we can
5923 andn %l0, BLOCK_ALIGN, %l0 ! do a block load 5955 andn %l0, BLOCK_ALIGN, %l0 ! do a block load
5924 ldda [%l0] ASI_BLK_P, %f0 5956 ldda [%l0] ASI_BLK_P, %f0
5925 inc BLOCK_SIZE, %l0 5957 inc BLOCK_SIZE, %l0
5926 ldda [%l0] ASI_BLK_P, %f16 5958 ldda [%l0] ASI_BLK_P, %f16
59271: 59591:
5928 bz,pt %icc, 2f 5960 bz,pt %icc, 2f
5929 nop 5961 nop
5930 add %sp, BIAS+CC64FSZ+BLOCK_SIZE, %l0 ! Generate a pointer so we can 5962 add %sp, BIAS+CC64FSZ+BLOCK_SIZE, %l0 ! Generate a pointer so we can
5931 andn %l0, BLOCK_ALIGN, %l0 ! do a block load 5963 andn %l0, BLOCK_ALIGN, %l0 ! do a block load
5932 inc 2*BLOCK_SIZE, %l0 ! and skip what we already loaded 5964 inc 2*BLOCK_SIZE, %l0 ! and skip what we already loaded
5933 ldda [%l0] ASI_BLK_P, %f32 5965 ldda [%l0] ASI_BLK_P, %f32
5934 inc BLOCK_SIZE, %l0 5966 inc BLOCK_SIZE, %l0
5935 ldda [%l0] ASI_BLK_P, %f48 5967 ldda [%l0] ASI_BLK_P, %f48
59362: 59682:
5937 mov %l2, %g2 5969 mov %l2, %g2
5938 mov %l3, %g3 5970 mov %l3, %g3
5939 mov %l4, %g4 5971 mov %l4, %g4
5940 mov %l5, %g5 5972 mov %l5, %g5
5941 mov %l6, %g6 5973 mov %l6, %g6
5942 mov %l7, %g7 5974 mov %l7, %g7
5943 membar #Sync 5975 membar #Sync
5944 5976
5945 restore %g0, SYS_compat_16___sigreturn14, %g1 ! get registers back & set syscall # 5977 restore %g0, SYS_compat_16___sigreturn14, %g1 ! get registers back & set syscall #
5946 add %sp, BIAS + 128 + 8, %o0! compute scp 5978 add %sp, BIAS + 128 + 8, %o0! compute scp
5947! andn %o0, 0x0f, %o0 5979! andn %o0, 0x0f, %o0
5948 t ST_SYSCALL ! sigreturn(scp) 5980 t ST_SYSCALL ! sigreturn(scp)
5949 ! sigreturn does not return unless it fails 5981 ! sigreturn does not return unless it fails
5950 mov SYS_exit, %g1 ! exit(errno) 5982 mov SYS_exit, %g1 ! exit(errno)
5951 t ST_SYSCALL 5983 t ST_SYSCALL
5952 /* NOTREACHED */ 5984 /* NOTREACHED */
5953 5985
5954 .globl _C_LABEL(esigcode) 5986 .globl _C_LABEL(esigcode)
5955_C_LABEL(esigcode): 5987_C_LABEL(esigcode):
5956#endif 5988#endif
5957 5989
5958#if !defined(_LP64) 5990#if !defined(_LP64)
5959 5991
5960#define SIGCODE_NAME sigcode 5992#define SIGCODE_NAME sigcode
5961#define ESIGCODE_NAME esigcode 5993#define ESIGCODE_NAME esigcode
5962#define SIGRETURN_NAME SYS_compat_16___sigreturn14 5994#define SIGRETURN_NAME SYS_compat_16___sigreturn14
5963#define EXIT_NAME SYS_exit 5995#define EXIT_NAME SYS_exit
5964 5996
5965#include "sigcode32.s" 5997#include "sigcode32.s"
5966 5998
5967#endif 5999#endif
5968#endif 6000#endif
5969 6001
5970/* 6002/*
5971 * Primitives 6003 * Primitives
5972 */ 6004 */
5973#ifdef ENTRY 6005#ifdef ENTRY
5974#undef ENTRY 6006#undef ENTRY
5975#endif 6007#endif
5976 6008
5977#ifdef GPROF 6009#ifdef GPROF
5978 .globl _mcount 6010 .globl _mcount
5979#define ENTRY(x) \ 6011#define ENTRY(x) \
5980 .globl _C_LABEL(x); .proc 1; .type _C_LABEL(x),@function; \ 6012 .globl _C_LABEL(x); .proc 1; .type _C_LABEL(x),@function; \
5981_C_LABEL(x): ; \ 6013_C_LABEL(x): ; \
5982 .data; \ 6014 .data; \
5983 .align 8; \ 6015 .align 8; \
59840: .uaword 0; .uaword 0; \ 60160: .uaword 0; .uaword 0; \
5985 .text; \ 6017 .text; \
5986 save %sp, -CC64FSZ, %sp; \ 6018 save %sp, -CC64FSZ, %sp; \
5987 sethi %hi(0b), %o0; \ 6019 sethi %hi(0b), %o0; \
5988 call _mcount; \ 6020 call _mcount; \
5989 or %o0, %lo(0b), %o0; \ 6021 or %o0, %lo(0b), %o0; \
5990 restore 6022 restore
5991#else 6023#else
5992#define ENTRY(x) .globl _C_LABEL(x); .proc 1; \ 6024#define ENTRY(x) .globl _C_LABEL(x); .proc 1; \
5993 .type _C_LABEL(x),@function; _C_LABEL(x): 6025 .type _C_LABEL(x),@function; _C_LABEL(x):
5994#endif 6026#endif
5995#define ALTENTRY(x) .globl _C_LABEL(x); _C_LABEL(x): 6027#define ALTENTRY(x) .globl _C_LABEL(x); _C_LABEL(x):
5996 6028
5997/* 6029/*
5998 * getfp() - get stack frame pointer 6030 * getfp() - get stack frame pointer
5999 */ 6031 */
6000ENTRY(getfp) 6032ENTRY(getfp)
6001 retl 6033 retl
6002 mov %fp, %o0 6034 mov %fp, %o0
6003 6035
6004/* 6036/*
6005 * copyinstr(fromaddr, toaddr, maxlength, &lencopied) 6037 * copyinstr(fromaddr, toaddr, maxlength, &lencopied)
6006 * 6038 *
6007 * Copy a null terminated string from the user address space into 6039 * Copy a null terminated string from the user address space into
6008 * the kernel address space. 6040 * the kernel address space.
6009 */ 6041 */
ENTRY(copyinstr)
	! %o0 = fromaddr, %o1 = toaddr, %o2 = maxlen, %o3 = &lencopied
#ifdef NOTDEF_DEBUG
	save	%sp, -CC64FSZ, %sp
	set	8f, %o0
	mov	%i0, %o1
	mov	%i1, %o2
	mov	%i2, %o3
	call	printf
	 mov	%i3, %o4
	restore
	.data
8:	.asciz	"copyinstr: from=%x to=%x max=%x &len=%x\n"
	_ALIGN
	.text
#endif
	brgz,pt	%o2, 1f			! Make sure len is valid
	 sethi	%hi(CPCB), %o4		! (first instr of copy)
	retl
	 mov	ENAMETOOLONG, %o0
1:
	LDPTR	[%o4 + %lo(CPCB)], %o4	! catch faults
	set	Lcsfault, %o5
	membar	#Sync
	STPTR	%o5, [%o4 + PCB_ONFAULT]

	mov	%o1, %o5		! save = toaddr;
! XXX should do this in bigger chunks when possible
0:					! loop:
	ldsba	[%o0] ASI_AIUS, %g1	! c = *fromaddr;
	stb	%g1, [%o1]		! *toaddr++ = c;
	inc	%o1
	brz,a,pn	%g1, Lcsdone	! if (c == NULL)
	 clr	%o0			! { error = 0; done; }
	deccc	%o2			! if (--len > 0) {
	bg,pt	%icc, 0b		!	fromaddr++;
	 inc	%o0			!	goto loop;
	ba,pt	%xcc, Lcsdone		! }
	 mov	ENAMETOOLONG, %o0	! error = ENAMETOOLONG;
	NOTREACHED

/*
 * copyoutstr(fromaddr, toaddr, maxlength, &lencopied)
 *
 * Copy a null terminated string from the kernel
 * address space to the user address space.
 */
ENTRY(copyoutstr)
	! %o0 = fromaddr, %o1 = toaddr, %o2 = maxlen, %o3 = &lencopied
#ifdef NOTDEF_DEBUG
	save	%sp, -CC64FSZ, %sp
	set	8f, %o0
	mov	%i0, %o1
	mov	%i1, %o2
	mov	%i2, %o3
	call	printf
	 mov	%i3, %o4
	restore
	.data
8:	.asciz	"copyoutstr: from=%x to=%x max=%x &len=%x\n"
	_ALIGN
	.text
#endif
	brgz,pt	%o2, 1f			! Make sure len is valid
	 sethi	%hi(CPCB), %o4		! (first instr of copy)
	retl
	 mov	ENAMETOOLONG, %o0
1:
	LDPTR	[%o4 + %lo(CPCB)], %o4	! catch faults
	set	Lcsfault, %o5
	membar	#Sync
	STPTR	%o5, [%o4 + PCB_ONFAULT]

	mov	%o1, %o5		! save = toaddr;
! XXX should do this in bigger chunks when possible
0:					! loop:
	ldsb	[%o0], %g1		! c = *fromaddr;
	stba	%g1, [%o1] ASI_AIUS	! *toaddr++ = c;
	inc	%o1
	brz,a,pn	%g1, Lcsdone	! if (c == NULL)
	 clr	%o0			! { error = 0; done; }
	deccc	%o2			! if (--len > 0) {
	bg,pt	%icc, 0b		!	fromaddr++;
	 inc	%o0			!	goto loop;
					! }
	mov	ENAMETOOLONG, %o0	! error = ENAMETOOLONG;
Lcsdone:				! done:
	sub	%o1, %o5, %o1		! len = to - save;
	brnz,a	%o3, 1f			! if (lencopied)
	 STPTR	%o1, [%o3]		!	*lencopied = len;
1:
	retl				! cpcb->pcb_onfault = 0;
	 STPTR	%g0, [%o4 + PCB_ONFAULT]! return (error);

Lcsfault:
#ifdef NOTDEF_DEBUG
	save	%sp, -CC64FSZ, %sp
	set	5f, %o0
	call	printf
	 nop
	restore
	.data
5:	.asciz	"Lcsfault: recovering\n"
	_ALIGN
	.text
#endif
	b	Lcsdone			! error = EFAULT;
	 mov	EFAULT, %o0		! goto ret;

/*
 * copystr(fromaddr, toaddr, maxlength, &lencopied)
 *
 * Copy a null terminated string from one point to another in
 * the kernel address space.  (This is a leaf procedure, but
 * it does not seem that way to the C compiler.)
 */
ENTRY(copystr)
	brgz,pt	%o2, 0f			! Make sure len is valid
	 mov	%o1, %o5		! to0 = to;
	retl
	 mov	ENAMETOOLONG, %o0
0:					! loop:
	ldsb	[%o0], %o4		! c = *from;
	tst	%o4
	stb	%o4, [%o1]		! *to++ = c;
	be	1f			! if (c == 0)
	 inc	%o1			!	goto ok;
	deccc	%o2			! if (--len > 0) {
	bg,a	0b			!	from++;
	 inc	%o0			!	goto loop;
	b	2f			! }
	 mov	ENAMETOOLONG, %o0	! ret = ENAMETOOLONG; goto done;
1:					! ok:
	clr	%o0			! ret = 0;
2:
	sub	%o1, %o5, %o1		! len = to - to0;
	tst	%o3			! if (lencopied)
	bnz,a	3f
	 STPTR	%o1, [%o3]		!	*lencopied = len;
3:
	retl
	 nop
#ifdef DIAGNOSTIC
4:
	sethi	%hi(5f), %o0
	call	_C_LABEL(panic)
	 or	%lo(5f), %o0, %o0
	.data
5:
	.asciz	"copystr"
	_ALIGN
	.text
#endif

/*
 * copyin(src, dst, len)
 *
 * Copy specified amount of data from user space into the kernel.
 *
 * This is a modified version of memcpy that uses ASI_AIUS.  When
 * memcpy is optimized to use block copy ASIs, this should be also.
 */

#define	BCOPY_SMALL	32	/* if < 32, copy by bytes */

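! The "fancy" path below picks the widest safe transfer size from the
! XOR of the two addresses: a low bit set in (src ^ dst) means src and
! dst can never be simultaneously aligned at or above that width.  A
! small C sketch of that decision (max_copy_width is a hypothetical
! name for illustration, not a kernel function):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Widest copy unit usable once src and dst are stepped into
 * alignment: mirrors the "t = src ^ dst; if (t & 1) ..." cascade
 * in Lcopyin_fancy below.
 */
static size_t
max_copy_width(uintptr_t src, uintptr_t dst)
{
	uintptr_t t = src ^ dst;

	if (t & 1)
		return 1;	/* low bits differ: byte copies only */
	if (t & 2)
		return 2;	/* can align to shorts but not words */
	if (t & 4)
		return 4;	/* can align to words but not doublewords */
	return 8;		/* low three bits match: doubleword copies */
}
```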
ENTRY(copyin)
!	flushw			! Make sure we don't have stack probs & lose hibits of %o
#ifdef NOTDEF_DEBUG
	save	%sp, -CC64FSZ, %sp
	set	1f, %o0
	mov	%i0, %o1
	mov	%i1, %o2
	call	printf
	 mov	%i2, %o3
	restore
	.data
1:	.asciz	"copyin: src=%x dest=%x len=%x\n"
	_ALIGN
	.text
#endif
	sethi	%hi(CPCB), %o3
	wr	%g0, ASI_AIUS, %asi
	LDPTR	[%o3 + %lo(CPCB)], %o3
	set	Lcopyfault, %o4
!	mov	%o7, %g7	! save return address
	membar	#Sync
	STPTR	%o4, [%o3 + PCB_ONFAULT]
	cmp	%o2, BCOPY_SMALL
Lcopyin_start:
	bge,a	Lcopyin_fancy	! if >= this many, go be fancy.
	 btst	7, %o0		! (part of being fancy)

	/*
	 * Not much to copy, just do it a byte at a time.
	 */
	deccc	%o2		! while (--len >= 0)
	bl	1f
0:
	 inc	%o0
	ldsba	[%o0 - 1] %asi, %o4	! *dst++ = (++src)[-1];
	stb	%o4, [%o1]
	deccc	%o2
	bge	0b
	 inc	%o1
1:
	ba	Lcopyin_done
	 clr	%o0
	NOTREACHED

	/*
	 * Plenty of data to copy, so try to do it optimally.
	 */
Lcopyin_fancy:
	! check for common case first: everything lines up.
!	btst	7, %o0		! done already
	bne	1f
	 EMPTY
	btst	7, %o1
	be,a	Lcopyin_doubles
	 dec	8, %o2		! if all lined up, len -= 8, goto copyin_doubles

	! If the low bits match, we can make these line up.
1:
	xor	%o0, %o1, %o3	! t = src ^ dst;
	btst	1, %o3		! if (t & 1) {
	be,a	1f
	 btst	1, %o0		! [delay slot: if (src & 1)]

	! low bits do not match, must copy by bytes.
0:
	ldsba	[%o0] %asi, %o4	! do {
	inc	%o0		!	(++dst)[-1] = *src++;
	inc	%o1
	deccc	%o2
	bnz	0b		! } while (--len != 0);
	 stb	%o4, [%o1 - 1]
	ba	Lcopyin_done
	 clr	%o0
	NOTREACHED

	! lowest bit matches, so we can copy by words, if nothing else
1:
	be,a	1f		! if (src & 1) {
	 btst	2, %o3		! [delay slot: if (t & 2)]

	! although low bits match, both are 1: must copy 1 byte to align
	ldsba	[%o0] %asi, %o4	!	*dst++ = *src++;
	stb	%o4, [%o1]
	inc	%o0
	inc	%o1
	dec	%o2		!	len--;
	btst	2, %o3		! } [if (t & 2)]
1:
	be,a	1f		! if (t & 2) {
	 btst	2, %o0		! [delay slot: if (src & 2)]
	dec	2, %o2		!	len -= 2;
0:
	ldsha	[%o0] %asi, %o4	!	do {
	sth	%o4, [%o1]	!		*(short *)dst = *(short *)src;
	inc	2, %o0		!		dst += 2, src += 2;
	deccc	2, %o2		!	} while ((len -= 2) >= 0);
	bge	0b
	 inc	2, %o1
	b	Lcopyin_mopb	!	goto mop_up_byte;
	 btst	1, %o2		! } [delay slot: if (len & 1)]
	NOTREACHED

	! low two bits match, so we can copy by longwords
1:
	be,a	1f		! if (src & 2) {
	 btst	4, %o3		! [delay slot: if (t & 4)]

	! although low 2 bits match, they are 10: must copy one short to align
	ldsha	[%o0] %asi, %o4	!	*(short *)dst = *(short *)src;
	sth	%o4, [%o1]
	inc	2, %o0		!	dst += 2;
	inc	2, %o1		!	src += 2;
	dec	2, %o2		!	len -= 2;
	btst	4, %o3		! } [if (t & 4)]
1:
	be,a	1f		! if (t & 4) {
	 btst	4, %o0		! [delay slot: if (src & 4)]
	dec	4, %o2		!	len -= 4;
0:
	lduwa	[%o0] %asi, %o4	!	do {
	st	%o4, [%o1]	!		*(int *)dst = *(int *)src;
	inc	4, %o0		!		dst += 4, src += 4;
	deccc	4, %o2		!	} while ((len -= 4) >= 0);
	bge	0b
	 inc	4, %o1
	b	Lcopyin_mopw	!	goto mop_up_word_and_byte;
	 btst	2, %o2		! } [delay slot: if (len & 2)]
	NOTREACHED

	! low three bits match, so we can copy by doublewords
1:
	be	1f		! if (src & 4) {
	 dec	8, %o2		! [delay slot: len -= 8]
	lduwa	[%o0] %asi, %o4	!	*(int *)dst = *(int *)src;
	st	%o4, [%o1]
	inc	4, %o0		!	dst += 4, src += 4, len -= 4;
	inc	4, %o1
	dec	4, %o2		! }
1:
Lcopyin_doubles:
	ldxa	[%o0] %asi, %g1	! do {
	stx	%g1, [%o1]	!	*(double *)dst = *(double *)src;
	inc	8, %o0		!	dst += 8, src += 8;
	deccc	8, %o2		! } while ((len -= 8) >= 0);
	bge	Lcopyin_doubles
	 inc	8, %o1

	! check for a usual case again (save work)
	btst	7, %o2		! if ((len & 7) == 0)
	be	Lcopyin_done	!	goto copyin_done;

	btst	4, %o2		! if ((len & 4) == 0)
	be,a	Lcopyin_mopw	!	goto mop_up_word_and_byte;
	 btst	2, %o2		! [delay slot: if (len & 2)]
	lduwa	[%o0] %asi, %o4	!	*(int *)dst = *(int *)src;
	st	%o4, [%o1]
	inc	4, %o0		!	dst += 4;
	inc	4, %o1		!	src += 4;
	btst	2, %o2		! } [if (len & 2)]

1:
	! mop up trailing word (if present) and byte (if present).
Lcopyin_mopw:
	be	Lcopyin_mopb	! no word, go mop up byte
	 btst	1, %o2		! [delay slot: if (len & 1)]
	ldsha	[%o0] %asi, %o4	! *(short *)dst = *(short *)src;
	be	Lcopyin_done	! if ((len & 1) == 0) goto done;
	 sth	%o4, [%o1]
	ldsba	[%o0 + 2] %asi, %o4	! dst[2] = src[2];
	stb	%o4, [%o1 + 2]
	ba	Lcopyin_done
	 clr	%o0
	NOTREACHED

	! mop up trailing byte (if present).
Lcopyin_mopb:
	be,a	Lcopyin_done
	 nop
	ldsba	[%o0] %asi, %o4
	stb	%o4, [%o1]

Lcopyin_done:
	sethi	%hi(CPCB), %o3
!	stb	%o4,[%o1]	! Store last byte -- should not be needed
	LDPTR	[%o3 + %lo(CPCB)], %o3
	membar	#Sync
	STPTR	%g0, [%o3 + PCB_ONFAULT]
	wr	%g0, ASI_PRIMARY_NOFAULT, %asi	! Restore ASI
	retl
	 clr	%o0		! return 0

/*
 * copyout(src, dst, len)
 *
 * Copy specified amount of data from kernel to user space.
 * Just like copyin, except that the `dst' addresses are user space
 * rather than the `src' addresses.
 *
 * This is a modified version of memcpy that uses ASI_AIUS.  When
 * memcpy is optimized to use block copy ASIs, this should be also.
 */
 /*
  * This needs to be reimplemented to really do the copy.
  */
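! Both copy paths rely on the pcb_onfault protocol: a recovery address
! (Lcopyfault) is registered before touching user memory, and the trap
! handler resumes there on a fault so the routine can return EFAULT
! instead of panicking.  A hypothetical userland sketch of that idea,
! using setjmp/longjmp as a stand-in for the trap machinery
! (guarded_copy is an illustrative name, not a kernel function):

```c
#include <assert.h>
#include <errno.h>
#include <setjmp.h>
#include <string.h>

static jmp_buf onfault;		/* stand-in for cpcb->pcb_onfault */

static int
guarded_copy(void *dst, const void *src, size_t len)
{
	if (setjmp(onfault) != 0)
		return EFAULT;	/* recovery point, like Lcopyfault */
	/*
	 * In the kernel, a fault while copying would enter the trap
	 * handler, which would resume at the registered recovery
	 * address (here: longjmp(onfault, 1) on our behalf).
	 */
	memcpy(dst, src, len);
	return 0;		/* success path: clear handler, return 0 */
}
```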
ENTRY(copyout)
	/*
	 * ******NOTE****** this depends on memcpy() not using %g7
	 */
#ifdef NOTDEF_DEBUG
	save	%sp, -CC64FSZ, %sp
	set	1f, %o0
	mov	%i0, %o1
	set	CTX_SECONDARY, %o4
	mov	%i1, %o2
	ldxa	[%o4] ASI_DMMU, %o4
	call	printf
	 mov	%i2, %o3
	restore
	.data
1:	.asciz	"copyout: src=%x dest=%x len=%x ctx=%d\n"
	_ALIGN
	.text
#endif
Ldocopy:
	sethi	%hi(CPCB), %o3
	wr	%g0, ASI_AIUS, %asi
	LDPTR	[%o3 + %lo(CPCB)], %o3
	set	Lcopyfault, %o4
!	mov	%o7, %g7	! save return address
	membar	#Sync
	STPTR	%o4, [%o3 + PCB_ONFAULT]
	cmp	%o2, BCOPY_SMALL
Lcopyout_start:
	membar	#StoreStore
	bge,a	Lcopyout_fancy	! if >= this many, go be fancy.
	 btst	7, %o0		! (part of being fancy)

	/*
	 * Not much to copy, just do it a byte at a time.
	 */
	deccc	%o2		! while (--len >= 0)
	bl	1f
	 EMPTY
0:
	inc	%o0
	ldsb	[%o0 - 1], %o4	! (++dst)[-1] = *src++;
	stba	%o4, [%o1] %asi
	deccc	%o2
	bge	0b
	 inc	%o1
1:
	ba	Lcopyout_done
	 clr	%o0
	NOTREACHED

	/*
	 * Plenty of data to copy, so try to do it optimally.
	 */
Lcopyout_fancy:
	! check for common case first: everything lines up.
!	btst	7, %o0		! done already
	bne	1f
	 EMPTY
	btst	7, %o1
	be,a	Lcopyout_doubles