mips: Membar audit.

This change should be safe because it doesn't remove or weaken any
memory barriers, but does add, clarify, or strengthen barriers.

Goals:

- Make sure mutex_enter/exit and mutex_spin_enter/exit have
  acquire/release semantics.

- New macros make maintenance easier and purpose clearer:

  . SYNC_ACQ is for load-before-load/store barrier, and BDSYNC_ACQ
    for a branch delay slot -- currently defined as plain sync for MP
    and nothing, or nop, for UP; thus it is no weaker than SYNC and
    BDSYNC as currently defined, which is syncw on Octeon, plain sync
    on non-Octeon MP, and nothing/nop on UP.

    It is not clear to me whether load-then-syncw or ll/sc-then-syncw
    or even bare load provides load-acquire semantics on Octeon -- if
    no, this will fix bugs; if yes (as it is on SPARC PSO), we can
    relax SYNC_ACQ to be syncw or nothing later.

  . SYNC_REL is for load/store-before-store barrier -- currently
    defined as plain sync for MP and nothing for UP.

    It is not clear to me whether syncw-then-store is enough for
    store-release on Octeon -- if no, we can leave this as is; if
    yes, we can relax SYNC_REL to be syncw on Octeon.

  . SYNC_PLUNGER is there to flush clogged Cavium store buffers, and
    BDSYNC_PLUNGER for a branch delay slot -- syncw on Octeon,
    nothing/nop on non-Octeon.

    => This is not necessary (or, as far as I'm aware, sufficient)
       for acquire semantics -- it serves only to flush store buffers
       where stores might otherwise linger for hundreds of thousands
       of cycles, which would, e.g., cause spin locks to be held for
       unreasonably long durations.

  Newerish revisions of the MIPS ISA also have finer-grained sync
  variants that could be plopped in here.

Mechanism: insert these barriers in the right places, replacing only
those whose current definition is equivalent, so this change is safe.

- Replace #ifdef _MIPS_ARCH_OCTEONP / syncw / #endif at the end of
  atomic_cas_* by SYNC_PLUNGER, which is `sync 4' (a.k.a. syncw) if
  __OCTEON__ and empty otherwise.

  => From what I can tell, __OCTEON__ is defined in at least as many
     contexts as _MIPS_ARCH_OCTEONP -- i.e., there are some Octeons
     with no _MIPS_ARCH_OCTEONP, but I don't know whether any of them
     are relevant to us or ever saw the light of day outside Cavium;
     we seem to build with `-march=octeonp', so this is unlikely to
     make a difference.  If it turns out that we do care, well, now
     there's a central place to make the distinction for sync
     instructions.

- Replace post-ll/sc SYNC by SYNC_ACQ in _atomic_cas_*, which are
  internal kernel versions used in sys/arch/mips/include/lock.h,
  which assumes they have load-acquire semantics.  Should move this
  to lock.h later, since we _don't_ define __HAVE_ATOMIC_AS_MEMBAR
  on MIPS and so the extra barrier might be costly.

- Insert SYNC_REL before ll/sc, and replace post-ll/sc SYNC by
  SYNC_ACQ, in _ucas_*, which is used without any barriers in the
  futex code and doesn't mention barriers in the man page, so I have
  to assume it is required to be a release/acquire barrier.

- Change BDSYNC to BDSYNC_ACQ in mutex_enter and mutex_spin_enter.
  This is necessary to provide load-acquire semantics -- unclear
  whether it was provided already by syncw on Octeon, but it seems
  more likely that either (a) no sync or syncw is needed at all, or
  (b) syncw is not enough and sync is needed, since syncw is only a
  store-before-store ordering barrier.

- Insert SYNC_REL before ll/sc in mutex_exit and mutex_spin_exit.
  This is currently redundant with the SYNC already there, but
  SYNC_REL more clearly identifies the necessary semantics in case
  we want to define it differently on different systems, and having
  a sync in the middle of an ll/sc is a bit weird and possibly not a
  good idea, so I intend to (carefully) remove the redundant SYNC in
  a later change.

- Change BDSYNC to BDSYNC_PLUNGER at the end of mutex_exit.  This is
  no semantic change right now -- it's syncw on Octeon, sync on
  non-Octeon MP, nop on UP -- but we can relax it later to nop on
  non-Cavium MP.

- Leave LLSCSYNC in for now -- it is apparently there for a Cavium
  erratum, but I'm not sure what the erratum is, exactly, and I have
  no reference for it.  I suspect these can be safely removed, but
  we might have to double up some other syncw instructions -- Linux
  uses it only in store-release sequences, not at the head of every
  ll/sc.

diff -r1.8 -r1.9 src/common/lib/libc/arch/mips/atomic/atomic_cas.S
(riastradh)
--- src/common/lib/libc/arch/mips/atomic/atomic_cas.S 2020/08/06 10:00:21 1.8
+++ src/common/lib/libc/arch/mips/atomic/atomic_cas.S 2022/02/27 19:21:53 1.9
@@ -1,102 +1,98 @@
-/*	$NetBSD: atomic_cas.S,v 1.8 2020/08/06 10:00:21 skrll Exp $	*/
+/*	$NetBSD: atomic_cas.S,v 1.9 2022/02/27 19:21:53 riastradh Exp $	*/
 
 /*-
  * Copyright (c) 2008 The NetBSD Foundation, Inc.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
  * are met:
  * 1. Redistributions of source code must retain the above copyright
  *    notice, this list of conditions and the following disclaimer.
  * 2. Redistributions in binary form must reproduce the above copyright
  *    notice, this list of conditions and the following disclaimer in the
  *    documentation and/or other materials provided with the distribution.
  *
  * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
  * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
  * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
  * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
  * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
  * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
  * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
  * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
  * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
  * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
  * POSSIBILITY OF SUCH DAMAGE.
  */
 
 #include <machine/asm.h>
 #include "atomic_op_asm.h"
 
-RCSID("$NetBSD: atomic_cas.S,v 1.8 2020/08/06 10:00:21 skrll Exp $")
+RCSID("$NetBSD: atomic_cas.S,v 1.9 2022/02/27 19:21:53 riastradh Exp $")
 
 	.text
 	.set	noat
 	.set	noreorder
 	.set	nomacro
 
 LEAF(_atomic_cas_32)
 	LLSCSYNC
 1:	INT_LL	v0, 0(a0)
 	nop
 	bne	v0, a1, 2f
 	nop
 	move	t0, a2
 	INT_SC	t0, 0(a0)
 	beq	t0, zero, 1b
 	nop
 	move	v0, a1
-#ifdef _MIPS_ARCH_OCTEONP
-	syncw
-#endif
+	SYNC_PLUNGER
 2:
 	j	ra
 	nop
 END(_atomic_cas_32)
 ATOMIC_OP_ALIAS(atomic_cas_32, _atomic_cas_32)
 ATOMIC_OP_ALIAS(atomic_cas_32_ni, _atomic_cas_32)
 
 #if !defined(__mips_o32)
 LEAF(_atomic_cas_64)
 	LLSCSYNC
 1:	REG_LL	v0, 0(a0)
 	nop
 	bne	v0, a1, 2f
 	nop
 	move	t0, a2
 	REG_SC	t0, 0(a0)
 	beq	t0, zero, 1b
 	nop
 	move	v0, a1
-#ifdef _MIPS_ARCH_OCTEONP
-	syncw
-#endif
+	SYNC_PLUNGER
 2:
 	j	ra
 	nop
 END(_atomic_cas_64)
 ATOMIC_OP_ALIAS(atomic_cas_64, _atomic_cas_64)
 ATOMIC_OP_ALIAS(atomic_cas_64_ni, _atomic_cas_64)
 #endif
 
 #ifdef _LP64
 STRONG_ALIAS(_atomic_cas_ptr, _atomic_cas_64)
 STRONG_ALIAS(_atomic_cas_ptr_ni, _atomic_cas_64)
 STRONG_ALIAS(_atomic_cas_ulong, _atomic_cas_64)
 STRONG_ALIAS(_atomic_cas_ulong_ni, _atomic_cas_64)
 #else
 STRONG_ALIAS(_atomic_cas_ptr, _atomic_cas_32)
 STRONG_ALIAS(_atomic_cas_ptr_ni, _atomic_cas_32)
 STRONG_ALIAS(_atomic_cas_ulong, _atomic_cas_32)
 STRONG_ALIAS(_atomic_cas_ulong_ni, _atomic_cas_32)
 #endif
 STRONG_ALIAS(_atomic_cas_uint, _atomic_cas_32)
 STRONG_ALIAS(_atomic_cas_uint_ni, _atomic_cas_32)
 
 ATOMIC_OP_ALIAS(atomic_cas_ptr, _atomic_cas_ptr)
 ATOMIC_OP_ALIAS(atomic_cas_ptr_ni, _atomic_cas_ptr_ni)
 ATOMIC_OP_ALIAS(atomic_cas_uint, _atomic_cas_uint)
 ATOMIC_OP_ALIAS(atomic_cas_uint_ni, _atomic_cas_uint_ni)
 ATOMIC_OP_ALIAS(atomic_cas_ulong, _atomic_cas_ulong)
 ATOMIC_OP_ALIAS(atomic_cas_ulong_ni, _atomic_cas_ulong_ni)
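As the log notes, NetBSD/mips does not define __HAVE_ATOMIC_AS_MEMBAR, so callers of atomic_cas_* get no ordering for free and must add explicit membars. The contract that SYNC_ACQ (after ll/sc) and SYNC_REL (before a store) exist to provide can be sketched in portable C with GCC/Clang __atomic builtins; this is a hypothetical illustration, not NetBSD source, and `lockword`, `try_lock`, and `unlock` are invented names:

```c
#include <assert.h>

/*
 * Sketch: a CAS-based try-lock with explicit memory orders.  The
 * ACQUIRE on success plays the role of SYNC_ACQ after the ll/sc; the
 * RELEASE on unlock plays the role of SYNC_REL before the store.
 */
static unsigned lockword;	/* 0 = unlocked, 1 = locked */

static int
try_lock(void)
{
	unsigned expected = 0;

	/* Acquire: critical-section accesses cannot move before the CAS. */
	return __atomic_compare_exchange_n(&lockword, &expected, 1,
	    0 /* strong CAS */, __ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
}

static void
unlock(void)
{
	/* Release: critical-section accesses cannot move after the store. */
	__atomic_store_n(&lockword, 0, __ATOMIC_RELEASE);
}
```

On MIPS the compiler lowers these to ll/sc loops bracketed by sync instructions, much like the hand-written sequences in this diff.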
--- src/common/lib/libc/arch/mips/atomic/atomic_op_asm.h 2020/08/01 09:26:49 1.4
+++ src/common/lib/libc/arch/mips/atomic/atomic_op_asm.h 2022/02/27 19:21:53 1.5
@@ -1,53 +1,47 @@
-/*	$NetBSD: atomic_op_asm.h,v 1.4 2020/08/01 09:26:49 skrll Exp $	*/
+/*	$NetBSD: atomic_op_asm.h,v 1.5 2022/02/27 19:21:53 riastradh Exp $	*/
 
 /*-
  * Copyright (c) 2007 The NetBSD Foundation, Inc.
  * All rights reserved.
  *
  * This code is derived from software contributed to The NetBSD Foundation
  * by Jason R. Thorpe.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
  * are met:
  * 1. Redistributions of source code must retain the above copyright
  *    notice, this list of conditions and the following disclaimer.
  * 2. Redistributions in binary form must reproduce the above copyright
  *    notice, this list of conditions and the following disclaimer in the
  *    documentation and/or other materials provided with the distribution.
  *
  * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
  * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
  * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
  * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
  * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
  * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
  * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
  * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
  * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
  * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
  * POSSIBILITY OF SUCH DAMAGE.
  */
 
 #ifndef	_ATOMIC_OP_ASM_H_
 #define	_ATOMIC_OP_ASM_H_
 
 #include <machine/asm.h>
 
 #if defined(_KERNEL)
 
 #define	ATOMIC_OP_ALIAS(a,s)	STRONG_ALIAS(a,s)
 
 #else /* _KERNEL */
 
 #define	ATOMIC_OP_ALIAS(a,s)	WEAK_ALIAS(a,s)
 
 #endif /* _KERNEL */
 
-#ifdef __OCTEON__
-#define	SYNCW	syncw
-#else
-#define	SYNCW	nop
-#endif
-
 #endif /* _ATOMIC_OP_ASM_H_ */
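The per-file SYNCW macro deleted here is superseded by centralized barrier macros in sys/arch/mips/include/asm.h. Based only on the descriptions in the commit message, those definitions plausibly look something like the following sketch; the exact spelling in the tree may differ, and the MULTIPROCESSOR/__OCTEON__ conditionals are assumptions:

```c
/* Hypothetical sketch; the real definitions live in mips/asm.h. */
#ifdef MULTIPROCESSOR
#define	SYNC_ACQ	sync		/* load-before-load/store */
#define	SYNC_REL	sync		/* load/store-before-store */
#define	BDSYNC_ACQ	sync		/* SYNC_ACQ in a branch delay slot */
#else
#define	SYNC_ACQ	/* nothing */
#define	SYNC_REL	/* nothing */
#define	BDSYNC_ACQ	nop		/* delay slot must hold an instruction */
#endif

#ifdef __OCTEON__
#define	SYNC_PLUNGER	sync 4		/* a.k.a. syncw: drain the write buffer */
#define	BDSYNC_PLUNGER	sync 4
#else
#define	SYNC_PLUNGER	/* nothing */
#define	BDSYNC_PLUNGER	nop
#endif
```

Centralizing the definitions this way is what makes the later relaxations the log anticipates (e.g. SYNC_ACQ to syncw on Octeon, BDSYNC_PLUNGER to nop on non-Cavium MP) one-line changes.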
--- src/common/lib/libc/arch/mips/atomic/atomic_swap.S 2020/08/06 10:00:21 1.7
+++ src/common/lib/libc/arch/mips/atomic/atomic_swap.S 2022/02/27 19:21:53 1.8
@@ -1,88 +1,88 @@
-/*	$NetBSD: atomic_swap.S,v 1.7 2020/08/06 10:00:21 skrll Exp $	*/
+/*	$NetBSD: atomic_swap.S,v 1.8 2022/02/27 19:21:53 riastradh Exp $	*/
 
 /*-
  * Copyright (c) 2008 The NetBSD Foundation, Inc.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
  * are met:
  * 1. Redistributions of source code must retain the above copyright
  *    notice, this list of conditions and the following disclaimer.
  * 2. Redistributions in binary form must reproduce the above copyright
  *    notice, this list of conditions and the following disclaimer in the
  *    documentation and/or other materials provided with the distribution.
  *
  * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
  * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
  * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
  * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
  * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
  * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
  * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
  * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
  * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
  * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
  * POSSIBILITY OF SUCH DAMAGE.
  */
 
 #include <machine/asm.h>
 #include "atomic_op_asm.h"
 
-RCSID("$NetBSD: atomic_swap.S,v 1.7 2020/08/06 10:00:21 skrll Exp $")
+RCSID("$NetBSD: atomic_swap.S,v 1.8 2022/02/27 19:21:53 riastradh Exp $")
 
 	.text
 	.set	noreorder
 #ifdef _KERNEL_OPT
 #include "opt_cputype.h"
 #ifndef MIPS3_LOONGSON2F
 	.set	noat
 	.set	nomacro
 #endif
 #else /* _KERNEL_OPT */
 	.set	noat
 	.set	nomacro
 #endif /* _KERNEL_OPT */
 
 LEAF(_atomic_swap_32)
 	LLSCSYNC
 1:	INT_LL	v0, 0(a0)
 	nop
 	move	t0, a1
 	INT_SC	t0, 0(a0)
 	beq	t0, zero, 1b
 	nop
 2:
 	j	ra
-	SYNCW
+	BDSYNC_PLUNGER
 END(_atomic_swap_32)
 ATOMIC_OP_ALIAS(atomic_swap_32, _atomic_swap_32)
 
 #if !defined(__mips_o32)
 LEAF(_atomic_swap_64)
 	LLSCSYNC
 1:	REG_LL	v0, 0(a0)
 	nop
 	move	t0, a1
 	REG_SC	t0, 0(a0)
 	beq	t0, zero, 1b
 	nop
 2:
 	j	ra
-	SYNCW
+	BDSYNC_PLUNGER
 END(_atomic_swap_64)
 ATOMIC_OP_ALIAS(atomic_swap_64, _atomic_swap_64)
 #endif
 
 #ifdef _LP64
 STRONG_ALIAS(_atomic_swap_ptr, _atomic_swap_64)
 STRONG_ALIAS(_atomic_swap_ulong, _atomic_swap_64)
 #else
 STRONG_ALIAS(_atomic_swap_ptr, _atomic_swap_32)
 STRONG_ALIAS(_atomic_swap_ulong, _atomic_swap_32)
 #endif
 STRONG_ALIAS(_atomic_swap_uint, _atomic_swap_32)
 
 ATOMIC_OP_ALIAS(atomic_swap_ptr, _atomic_swap_ptr)
 ATOMIC_OP_ALIAS(atomic_swap_uint, _atomic_swap_uint)
 ATOMIC_OP_ALIAS(atomic_swap_ulong, _atomic_swap_ulong)
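The acquire/release pairing this audit establishes for the mutex paths is the classic message-passing pattern: the exit path must order its earlier stores before the lock release (SYNC_REL), and the enter path must order the lock acquisition before its later loads (SYNC_ACQ/BDSYNC_ACQ). A hedged C sketch with compiler builtins, using invented `publish`/`consume` names and run single-threaded here for determinism (the orderings matter when two CPUs race):

```c
#include <assert.h>

static int data;		/* plain data handed off between CPUs */
static unsigned ready;		/* flag, accessed with atomic builtins */

/*
 * Writer side, mutex_exit-style: the data store must become visible
 * before the flag store -- a load/store-before-store barrier, i.e.
 * exactly the SYNC_REL inserted before the ll/sc in mutex_exit.
 */
static void
publish(int v)
{
	data = v;
	__atomic_store_n(&ready, 1, __ATOMIC_RELEASE);
}

/*
 * Reader side, mutex_enter-style: the flag load must be ordered
 * before the data load -- a load-before-load/store barrier, i.e. the
 * SYNC_ACQ/BDSYNC_ACQ placed after the ll/sc in mutex_enter.
 */
static int
consume(void)
{
	while (__atomic_load_n(&ready, __ATOMIC_ACQUIRE) == 0)
		continue;
	return data;
}
```

Note that a store-buffer flush like syncw only orders stores against stores: it could serve on the writer side at best, which is why the log argues syncw alone cannot stand in for the acquire barrier on the enter path.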
--- src/sys/arch/mips/include/asm.h 2021/02/18 12:28:01 1.65
+++ src/sys/arch/mips/include/asm.h 2022/02/27 19:21:53 1.66
@@ -1,716 +1,731 @@ | @@ -1,716 +1,731 @@ | |||
1 | /* $NetBSD: asm.h,v 1.65 2021/02/18 12:28:01 simonb Exp $ */ | 1 | /* $NetBSD: asm.h,v 1.66 2022/02/27 19:21:53 riastradh Exp $ */ | |
2 | 2 | |||
3 | /* | 3 | /* | |
4 | * Copyright (c) 1992, 1993 | 4 | * Copyright (c) 1992, 1993 | |
5 | * The Regents of the University of California. All rights reserved. | 5 | * The Regents of the University of California. All rights reserved. | |
6 | * | 6 | * | |
7 | * This code is derived from software contributed to Berkeley by | 7 | * This code is derived from software contributed to Berkeley by | |
8 | * Ralph Campbell. | 8 | * Ralph Campbell. | |
9 | * | 9 | * | |
10 | * Redistribution and use in source and binary forms, with or without | 10 | * Redistribution and use in source and binary forms, with or without | |
11 | * modification, are permitted provided that the following conditions | 11 | * modification, are permitted provided that the following conditions | |
12 | * are met: | 12 | * are met: | |
13 | * 1. Redistributions of source code must retain the above copyright | 13 | * 1. Redistributions of source code must retain the above copyright | |
14 | * notice, this list of conditions and the following disclaimer. | 14 | * notice, this list of conditions and the following disclaimer. | |
15 | * 2. Redistributions in binary form must reproduce the above copyright | 15 | * 2. Redistributions in binary form must reproduce the above copyright | |
16 | * notice, this list of conditions and the following disclaimer in the | 16 | * notice, this list of conditions and the following disclaimer in the | |
17 | * documentation and/or other materials provided with the distribution. | 17 | * documentation and/or other materials provided with the distribution. | |
18 | * 3. Neither the name of the University nor the names of its contributors | 18 | * 3. Neither the name of the University nor the names of its contributors | |
19 | * may be used to endorse or promote products derived from this software | 19 | * may be used to endorse or promote products derived from this software | |
20 | * without specific prior written permission. | 20 | * without specific prior written permission. | |
21 | * | 21 | * | |
22 | * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND | 22 | * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND | |
23 | * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE | 23 | * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE | |
24 | * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE | 24 | * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE | |
25 | * ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE | 25 | * ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE | |
26 | * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL | 26 | * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL | |
27 | * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS | 27 | * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS | |
28 | * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) | 28 | * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) | |
29 | * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT | 29 | * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT | |
30 | * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY | 30 | * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY | |
31 | * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF | 31 | * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF | |
32 | * SUCH DAMAGE. | 32 | * SUCH DAMAGE. | |
33 | * | 33 | * | |
34 | * @(#)machAsmDefs.h 8.1 (Berkeley) 6/10/93 | 34 | * @(#)machAsmDefs.h 8.1 (Berkeley) 6/10/93 | |
35 | */ | 35 | */ | |
36 | 36 | |||
37 | /* | 37 | /* | |
38 | * machAsmDefs.h -- | 38 | * machAsmDefs.h -- | |
39 | * | 39 | * | |
40 | * Macros used when writing assembler programs. | 40 | * Macros used when writing assembler programs. | |
41 | * | 41 | * | |
42 | * Copyright (C) 1989 Digital Equipment Corporation. | 42 | * Copyright (C) 1989 Digital Equipment Corporation. | |
43 | * Permission to use, copy, modify, and distribute this software and | 43 | * Permission to use, copy, modify, and distribute this software and | |
44 | * its documentation for any purpose and without fee is hereby granted, | 44 | * its documentation for any purpose and without fee is hereby granted, | |
45 | * provided that the above copyright notice appears in all copies. | 45 | * provided that the above copyright notice appears in all copies. | |
46 | * Digital Equipment Corporation makes no representations about the | 46 | * Digital Equipment Corporation makes no representations about the | |
47 | * suitability of this software for any purpose. It is provided "as is" | 47 | * suitability of this software for any purpose. It is provided "as is" | |
48 | * without express or implied warranty. | 48 | * without express or implied warranty. | |
49 | * | 49 | * | |
50 | * from: Header: /sprite/src/kernel/mach/ds3100.md/RCS/machAsmDefs.h, | 50 | * from: Header: /sprite/src/kernel/mach/ds3100.md/RCS/machAsmDefs.h, | |
51 | * v 1.2 89/08/15 18:28:24 rab Exp SPRITE (DECWRL) | 51 | * v 1.2 89/08/15 18:28:24 rab Exp SPRITE (DECWRL) | |
52 | */ | 52 | */ | |
53 | 53 | |||
54 | #ifndef _MIPS_ASM_H | 54 | #ifndef _MIPS_ASM_H | |
55 | #define _MIPS_ASM_H | 55 | #define _MIPS_ASM_H | |
56 | 56 | |||
57 | #include <sys/cdefs.h> /* for API selection */ | 57 | #include <sys/cdefs.h> /* for API selection */ | |
58 | #include <mips/regdef.h> | 58 | #include <mips/regdef.h> | |
59 | 59 | |||
60 | #if defined(_KERNEL_OPT) | 60 | #if defined(_KERNEL_OPT) | |
61 | #include "opt_gprof.h" | 61 | #include "opt_gprof.h" | |
62 | #endif | 62 | #endif | |
63 | 63 | |||
64 | #define __BIT(n) (1 << (n)) | 64 | #define __BIT(n) (1 << (n)) | |
65 | #define __BITS(hi,lo) ((~((~0)<<((hi)+1)))&((~0)<<(lo))) | 65 | #define __BITS(hi,lo) ((~((~0)<<((hi)+1)))&((~0)<<(lo))) | |
66 | 66 | |||
67 | #define __LOWEST_SET_BIT(__mask) ((((__mask) - 1) & (__mask)) ^ (__mask)) | 67 | #define __LOWEST_SET_BIT(__mask) ((((__mask) - 1) & (__mask)) ^ (__mask)) | |
68 | #define __SHIFTOUT(__x, __mask) (((__x) & (__mask)) / __LOWEST_SET_BIT(__mask)) | 68 | #define __SHIFTOUT(__x, __mask) (((__x) & (__mask)) / __LOWEST_SET_BIT(__mask)) | |
69 | #define __SHIFTIN(__x, __mask) ((__x) * __LOWEST_SET_BIT(__mask)) | 69 | #define __SHIFTIN(__x, __mask) ((__x) * __LOWEST_SET_BIT(__mask)) | |
70 | 70 | |||
71 | /* | 71 | /* | |
72 | * Define -pg profile entry code. | 72 | * Define -pg profile entry code. | |
73 | * Must always be noreorder, must never use a macro instruction. | 73 | * Must always be noreorder, must never use a macro instruction. | |
74 | */ | 74 | */ | |
75 | #if defined(__mips_o32) /* Old 32-bit ABI */ | 75 | #if defined(__mips_o32) /* Old 32-bit ABI */ | |
76 | /* | 76 | /* | |
77 | * The old ABI version must also decrement two less words off the | 77 | * The old ABI version must also decrement two less words off the | |
78 | * stack and the final addiu to t9 must always equal the size of this | 78 | * stack and the final addiu to t9 must always equal the size of this | |
79 | * _MIPS_ASM_MCOUNT. | 79 | * _MIPS_ASM_MCOUNT. | |
80 | */ | 80 | */ | |
#define _MIPS_ASM_MCOUNT \
	.set push; \
	.set noreorder; \
	.set noat; \
	subu sp,16; \
	sw t9,12(sp); \
	move AT,ra; \
	lui t9,%hi(_mcount); \
	addiu t9,t9,%lo(_mcount); \
	jalr t9; \
	nop; \
	lw t9,4(sp); \
	addiu sp,8; \
	addiu t9,40; \
	.set pop;
#elif defined(__mips_o64)	/* Old 64-bit ABI */
# error yeahnah
#else /* New (n32/n64) ABI */
/*
 * The new ABI version just needs to put the return address in AT and
 * call _mcount().  For the no abicalls case, skip the reloc dance.
 */
#ifdef __mips_abicalls
#define _MIPS_ASM_MCOUNT \
	.set push; \
	.set noreorder; \
	.set noat; \
	subu sp,16; \
	sw t9,8(sp); \
	move AT,ra; \
	lui t9,%hi(_mcount); \
	addiu t9,t9,%lo(_mcount); \
	jalr t9; \
	nop; \
	lw t9,8(sp); \
	addiu sp,16; \
	.set pop;
#else /* !__mips_abicalls */
#define _MIPS_ASM_MCOUNT \
	.set push; \
	.set noreorder; \
	.set noat; \
	move AT,ra; \
	jal _mcount; \
	nop; \
	.set pop;
#endif /* !__mips_abicalls */
#endif /* n32/n64 */

#ifdef GPROF
#define MCOUNT _MIPS_ASM_MCOUNT
#else
#define MCOUNT
#endif

#ifdef USE_AENT
#define AENT(x) \
	.aent x, 0
#else
#define AENT(x)
#endif

/*
 * WEAK_ALIAS: create a weak alias.
 */
#define WEAK_ALIAS(alias,sym) \
	.weak alias; \
	alias = sym
/*
 * STRONG_ALIAS: create a strong alias.
 */
#define STRONG_ALIAS(alias,sym) \
	.globl alias; \
	alias = sym

/*
 * WARN_REFERENCES: create a warning if the specified symbol is referenced.
 */
#define WARN_REFERENCES(sym,msg) \
	.pushsection __CONCAT(.gnu.warning.,sym); \
	.ascii msg; \
	.popsection

/*
 * STATIC_LEAF_NOPROFILE
 *	No profilable local leaf routine.
 */
#define STATIC_LEAF_NOPROFILE(x) \
	.ent _C_LABEL(x); \
_C_LABEL(x): ; \
	.frame sp, 0, ra

/*
 * LEAF_NOPROFILE
 *	No profilable leaf routine.
 */
#define LEAF_NOPROFILE(x) \
	.globl _C_LABEL(x); \
	STATIC_LEAF_NOPROFILE(x)

/*
 * STATIC_LEAF
 *	Declare a local leaf function.
 */
#define STATIC_LEAF(x) \
	STATIC_LEAF_NOPROFILE(x); \
	MCOUNT

/*
 * LEAF
 *	A leaf routine does:
 *	- call no other function,
 *	- never use any callee-saved register (S0-S8), and
 *	- not use any local stack storage.
 */
#define LEAF(x) \
	LEAF_NOPROFILE(x); \
	MCOUNT

/*
 * STATIC_XLEAF
 *	declare alternate entry to a static leaf routine
 */
#define STATIC_XLEAF(x) \
	AENT (_C_LABEL(x)); \
_C_LABEL(x):

/*
 * XLEAF
 *	declare alternate entry to leaf routine
 */
#define XLEAF(x) \
	.globl _C_LABEL(x); \
	STATIC_XLEAF(x)

/*
 * STATIC_NESTED_NOPROFILE
 *	No profilable local nested routine.
 */
#define STATIC_NESTED_NOPROFILE(x, fsize, retpc) \
	.ent _C_LABEL(x); \
	.type _C_LABEL(x), @function; \
_C_LABEL(x): ; \
	.frame sp, fsize, retpc

/*
 * NESTED_NOPROFILE
 *	No profilable nested routine.
 */
#define NESTED_NOPROFILE(x, fsize, retpc) \
	.globl _C_LABEL(x); \
	STATIC_NESTED_NOPROFILE(x, fsize, retpc)

/*
 * NESTED
 *	A nested routine calls other functions and therefore needs
 *	stack space to save/restore registers.
 */
#define NESTED(x, fsize, retpc) \
	NESTED_NOPROFILE(x, fsize, retpc); \
	MCOUNT

/*
 * STATIC_NESTED
 *	Declare a local nested routine.
 */
#define STATIC_NESTED(x, fsize, retpc) \
	STATIC_NESTED_NOPROFILE(x, fsize, retpc); \
	MCOUNT

/*
 * XNESTED
 *	declare alternate entry point to nested routine.
 */
#define XNESTED(x) \
	.globl _C_LABEL(x); \
	AENT (_C_LABEL(x)); \
_C_LABEL(x):

/*
 * END
 *	Mark end of a procedure.
 */
#define END(x) \
	.end _C_LABEL(x); \
	.size _C_LABEL(x), . - _C_LABEL(x)

/*
 * IMPORT -- import external symbol
 */
#define IMPORT(sym, size) \
	.extern _C_LABEL(sym),size

/*
 * EXPORT -- export definition of symbol
 */
#define EXPORT(x) \
	.globl _C_LABEL(x); \
_C_LABEL(x):

/*
 * EXPORT_OBJECT -- export definition of symbol of type Object,
 *	visible to ksyms(4) address search.
 */
#define EXPORT_OBJECT(x) \
	EXPORT(x); \
	.type _C_LABEL(x), @object;

/*
 * VECTOR
 *	exception vector entrypoint
 *	XXX: regmask should be used to generate .mask
 */
#define VECTOR(x, regmask) \
	.ent _C_LABEL(x); \
	EXPORT(x); \

#define VECTOR_END(x) \
	EXPORT(__CONCAT(x,_end)); \
	END(x); \
	.org _C_LABEL(x) + 0x80

/*
 * Macros to panic and printf from assembly language.
 */
#define PANIC(msg) \
	PTR_LA a0, 9f; \
	jal _C_LABEL(panic); \
	nop; \
	MSG(msg)

#define PRINTF(msg) \
	PTR_LA a0, 9f; \
	jal _C_LABEL(printf); \
	nop; \
	MSG(msg)

#define MSG(msg) \
	.rdata; \
9:	.asciz msg; \
	.text

#define ASMSTR(str) \
	.asciz str; \
	.align 3

#define RCSID(x)	.pushsection ".ident","MS",@progbits,1; \
			.asciz x; \
			.popsection

/*
 * XXX retain dialects XXX
 */
#define ALEAF(x)			XLEAF(x)
#define NLEAF(x)			LEAF_NOPROFILE(x)
#define NON_LEAF(x, fsize, retpc)	NESTED(x, fsize, retpc)
#define NNON_LEAF(x, fsize, retpc)	NESTED_NOPROFILE(x, fsize, retpc)

#if defined(__mips_o32)
#define SZREG	4
#else
#define SZREG	8
#endif

#if defined(__mips_o32) || defined(__mips_o64)
#define ALSK	7		/* stack alignment */
#define ALMASK	-7		/* stack alignment */
#define SZFPREG	4
#define FP_L	lwc1
#define FP_S	swc1
#else
#define ALSK	15		/* stack alignment */
#define ALMASK	-15		/* stack alignment */
#define SZFPREG	8
#define FP_L	ldc1
#define FP_S	sdc1
#endif

/*
 * standard callframe {
 *	register_t cf_args[4];		arg0 - arg3 (only on o32 and o64)
 *	register_t cf_pad[N];		o32/o64 (N=0), n32 (N=1), n64 (N=1)
 *	register_t cf_gp;		global pointer (only on n32 and n64)
 *	register_t cf_sp;		frame pointer
 *	register_t cf_ra;		return address
 * };
 */
#if defined(__mips_o32) || defined(__mips_o64)
#define CALLFRAME_SIZ	(SZREG * (4 + 2))
#define CALLFRAME_S0	0
#elif defined(__mips_n32) || defined(__mips_n64)
#define CALLFRAME_SIZ	(SZREG * 4)
#define CALLFRAME_S0	(CALLFRAME_SIZ - 4 * SZREG)
#endif
#ifndef _KERNEL
#define CALLFRAME_GP	(CALLFRAME_SIZ - 3 * SZREG)
#endif
#define CALLFRAME_SP	(CALLFRAME_SIZ - 2 * SZREG)
#define CALLFRAME_RA	(CALLFRAME_SIZ - 1 * SZREG)

/*
 * While it would be nice to be compatible with the SGI
 * REG_L and REG_S macros, because they do not take parameters, it
 * is impossible to use them with the _MIPS_SIM_ABIX32 model.
 *
 * These macros hide the use of mips3 instructions from the
 * assembler to prevent the assembler from generating 64-bit style
 * ABI calls.
 */
#ifdef __mips_o32
#define PTR_ADD		add
#define PTR_ADDI	addi
#define PTR_ADDU	addu
#define PTR_ADDIU	addiu
#define PTR_SUB		subu
#define PTR_SUBI	subi
#define PTR_SUBU	subu
#define PTR_SUBIU	subu
#define PTR_L		lw
#define PTR_LA		la
#define PTR_S		sw
#define PTR_SLL		sll
#define PTR_SLLV	sllv
#define PTR_SRL		srl
#define PTR_SRLV	srlv
#define PTR_SRA		sra
#define PTR_SRAV	srav
#define PTR_LL		ll
#define PTR_SC		sc
#define PTR_WORD	.word
#define PTR_SCALESHIFT	2
#else /* _MIPS_SZPTR == 64 */
#define PTR_ADD		dadd
#define PTR_ADDI	daddi
#define PTR_ADDU	daddu
#define PTR_ADDIU	daddiu
#define PTR_SUB		dsubu
#define PTR_SUBI	dsubi
#define PTR_SUBU	dsubu
#define PTR_SUBIU	dsubu
#ifdef __mips_n32
#define PTR_L		lw
#define PTR_LL		ll
#define PTR_SC		sc
#define PTR_S		sw
#define PTR_SCALESHIFT	2
#define PTR_WORD	.word
#else
#define PTR_L		ld
#define PTR_LL		lld
#define PTR_SC		scd
#define PTR_S		sd
#define PTR_SCALESHIFT	3
#define PTR_WORD	.dword
#endif
#define PTR_LA		dla
#define PTR_SLL		dsll
#define PTR_SLLV	dsllv
#define PTR_SRL		dsrl
#define PTR_SRLV	dsrlv
#define PTR_SRA		dsra
#define PTR_SRAV	dsrav
#endif /* _MIPS_SZPTR == 64 */

#if _MIPS_SZINT == 32
#define INT_ADD		add
#define INT_ADDI	addi
#define INT_ADDU	addu
#define INT_ADDIU	addiu
#define INT_SUB		subu
#define INT_SUBI	subi
#define INT_SUBU	subu
#define INT_SUBIU	subu
#define INT_L		lw
#define INT_LA		la
#define INT_S		sw
#define INT_SLL		sll
#define INT_SLLV	sllv
#define INT_SRL		srl
#define INT_SRLV	srlv
#define INT_SRA		sra
#define INT_SRAV	srav
#define INT_LL		ll
#define INT_SC		sc
#define INT_WORD	.word
#define INT_SCALESHIFT	2
#else
#define INT_ADD		dadd
#define INT_ADDI	daddi
#define INT_ADDU	daddu
#define INT_ADDIU	daddiu
#define INT_SUB		dsubu
#define INT_SUBI	dsubi
#define INT_SUBU	dsubu
#define INT_SUBIU	dsubu
#define INT_L		ld
#define INT_LA		dla
#define INT_S		sd
#define INT_SLL		dsll
#define INT_SLLV	dsllv
#define INT_SRL		dsrl
#define INT_SRLV	dsrlv
#define INT_SRA		dsra
#define INT_SRAV	dsrav
#define INT_LL		lld
#define INT_SC		scd
#define INT_WORD	.dword
#define INT_SCALESHIFT	3
#endif

#if _MIPS_SZLONG == 32
#define LONG_ADD	add
#define LONG_ADDI	addi
#define LONG_ADDU	addu
#define LONG_ADDIU	addiu
#define LONG_SUB	subu
#define LONG_SUBI	subi
#define LONG_SUBU	subu
#define LONG_SUBIU	subu
#define LONG_L		lw
#define LONG_LA		la
#define LONG_S		sw
#define LONG_SLL	sll
#define LONG_SLLV	sllv
#define LONG_SRL	srl
#define LONG_SRLV	srlv
#define LONG_SRA	sra
#define LONG_SRAV	srav
#define LONG_LL		ll
#define LONG_SC		sc
#define LONG_WORD	.word
#define LONG_SCALESHIFT	2
#else
#define LONG_ADD	dadd
#define LONG_ADDI	daddi
#define LONG_ADDU	daddu
#define LONG_ADDIU	daddiu
#define LONG_SUB	dsubu
#define LONG_SUBI	dsubi
#define LONG_SUBU	dsubu
#define LONG_SUBIU	dsubu
#define LONG_L		ld
#define LONG_LA		dla
#define LONG_S		sd
#define LONG_SLL	dsll
#define LONG_SLLV	dsllv
#define LONG_SRL	dsrl
#define LONG_SRLV	dsrlv
#define LONG_SRA	dsra
#define LONG_SRAV	dsrav
#define LONG_LL		lld
#define LONG_SC		scd
#define LONG_WORD	.dword
#define LONG_SCALESHIFT	3
#endif

#if SZREG == 4
#define REG_L		lw
#define REG_S		sw
#define REG_LI		li
#define REG_ADDU	addu
#define REG_SLL		sll
#define REG_SLLV	sllv
#define REG_SRL		srl
#define REG_SRLV	srlv
#define REG_SRA		sra
#define REG_SRAV	srav
#define REG_LL		ll
#define REG_SC		sc
#define REG_SCALESHIFT	2
#else
#define REG_L		ld
#define REG_S		sd
#define REG_LI		dli
#define REG_ADDU	daddu
#define REG_SLL		dsll
#define REG_SLLV	dsllv
#define REG_SRL		dsrl
#define REG_SRLV	dsrlv
#define REG_SRA		dsra
#define REG_SRAV	dsrav
#define REG_LL		lld
#define REG_SC		scd
#define REG_SCALESHIFT	3
#endif

#if (MIPS1 + MIPS2) > 0
#define NOP_L		nop
#else
#define NOP_L		/* nothing */
#endif

/* compiler define */
#if defined(__OCTEON__)
/* early cnMIPS cores have an erratum which requires issuing syncw twice */
#define LLSCSYNC	sync 4; sync 4
#define SYNC		sync 4	/* sync 4 == syncw - sync all writes */
#define BDSYNC		sync 4	/* sync 4 == syncw - sync all writes */
#define BDSYNC_ACQ	sync
#define SYNC_ACQ	sync
#define SYNC_REL	sync
#define BDSYNC_PLUNGER	sync 4
#define SYNC_PLUNGER	sync 4
#elif __mips >= 3 || !defined(__mips_o32)
#define LLSCSYNC	sync
#define SYNC		sync
#define BDSYNC		sync
#define BDSYNC_ACQ	sync
#define SYNC_ACQ	sync
#define SYNC_REL	sync
#define BDSYNC_PLUNGER	nop
#define SYNC_PLUNGER	/* nothing */
#else
#define LLSCSYNC	/* nothing */
#define SYNC		/* nothing */
#define BDSYNC		nop
#define BDSYNC_ACQ	nop
#define SYNC_ACQ	/* nothing */
#define SYNC_REL	/* nothing */
#define BDSYNC_PLUNGER	nop
#define SYNC_PLUNGER	/* nothing */
#endif

/* CPU dependent hook for cp0 load delays */
#if defined(MIPS1) || defined(MIPS2) || defined(MIPS3)
#define MFC0_HAZARD	sll $0,$0,1	/* super scalar nop */
#else
#define MFC0_HAZARD	/* nothing */
#endif

#if _MIPS_ISA == _MIPS_ISA_MIPS1 || _MIPS_ISA == _MIPS_ISA_MIPS2 || \
    _MIPS_ISA == _MIPS_ISA_MIPS32
#define MFC0		mfc0
#define MTC0		mtc0
#endif
#if _MIPS_ISA == _MIPS_ISA_MIPS3 || _MIPS_ISA == _MIPS_ISA_MIPS4 || \
    _MIPS_ISA == _MIPS_ISA_MIPS64
#define MFC0		dmfc0
#define MTC0		dmtc0
#endif

#if defined(__mips_o32) || defined(__mips_o64)

#ifdef __mips_abicalls
#define CPRESTORE(r)	.cprestore r
#define CPLOAD(r)	.cpload r
#else
#define CPRESTORE(r)	/* not needed */
#define CPLOAD(r)	/* not needed */
#endif

#define SETUP_GP \
	.set push; \
	.set noreorder; \
	.cpload t9; \
	.set pop
#define SETUP_GPX(r) \
	.set push; \
	.set noreorder; \
	move r,ra; /* save old ra */ \
	bal 7f; \
	nop; \
7:	.cpload ra; \
	move ra,r; \
	.set pop
#define SETUP_GPX_L(r,lbl) \
	.set push; \
	.set noreorder; \
	move r,ra; /* save old ra */ \
	bal lbl; \
	nop; \
lbl:	.cpload ra; \
	move ra,r; \
	.set pop
#define SAVE_GP(x)	.cprestore x

#define SETUP_GP64(a,b)		/* n32/n64 specific */
#define SETUP_GP64_R(a,b)	/* n32/n64 specific */
#define SETUP_GPX64(a,b)	/* n32/n64 specific */
#define SETUP_GPX64_L(a,b,c)	/* n32/n64 specific */
#define RESTORE_GP64		/* n32/n64 specific */
#define USE_ALT_CP(a)		/* n32/n64 specific */
#endif /* __mips_o32 || __mips_o64 */

#if defined(__mips_o32) || defined(__mips_o64)
#define REG_PROLOGUE	.set push
#define REG_EPILOGUE	.set pop
#endif
#if defined(__mips_n32) || defined(__mips_n64)
#define REG_PROLOGUE	.set push ; .set mips3
#define REG_EPILOGUE	.set pop
#endif
658 | 673 | |||
659 | #if defined(__mips_n32) || defined(__mips_n64) | 674 | #if defined(__mips_n32) || defined(__mips_n64) | |
660 | #define SETUP_GP /* o32 specific */ | 675 | #define SETUP_GP /* o32 specific */ | |
661 | #define SETUP_GPX(r) /* o32 specific */ | 676 | #define SETUP_GPX(r) /* o32 specific */ | |
662 | #define SETUP_GPX_L(r,lbl) /* o32 specific */ | 677 | #define SETUP_GPX_L(r,lbl) /* o32 specific */ | |
663 | #define SAVE_GP(x) /* o32 specific */ | 678 | #define SAVE_GP(x) /* o32 specific */ | |
664 | #define SETUP_GP64(a,b) .cpsetup t9, a, b | 679 | #define SETUP_GP64(a,b) .cpsetup t9, a, b | |
665 | #define SETUP_GPX64(a,b) \ | 680 | #define SETUP_GPX64(a,b) \ | |
666 | .set push; \ | 681 | .set push; \ | |
667 | move b,ra; \ | 682 | move b,ra; \ | |
668 | .set noreorder; \ | 683 | .set noreorder; \ | |
669 | bal 7f; \ | 684 | bal 7f; \ | |
670 | nop; \ | 685 | nop; \ | |
671 | 7: .set pop; \ | 686 | 7: .set pop; \ | |
672 | .cpsetup ra, a, 7b; \ | 687 | .cpsetup ra, a, 7b; \ | |
673 | move ra,b | 688 | move ra,b | |
674 | #define SETUP_GPX64_L(a,b,c) \ | 689 | #define SETUP_GPX64_L(a,b,c) \ | |
675 | .set push; \ | 690 | .set push; \ | |
676 | move b,ra; \ | 691 | move b,ra; \ | |
677 | .set noreorder; \ | 692 | .set noreorder; \ | |
678 | bal c; \ | 693 | bal c; \ | |
679 | nop; \ | 694 | nop; \ | |
680 | c: .set pop; \ | 695 | c: .set pop; \ | |
681 | .cpsetup ra, a, c; \ | 696 | .cpsetup ra, a, c; \ | |
682 | move ra,b | 697 | move ra,b | |
683 | #define RESTORE_GP64 .cpreturn | 698 | #define RESTORE_GP64 .cpreturn | |
684 | #define USE_ALT_CP(a) .cplocal a | 699 | #define USE_ALT_CP(a) .cplocal a | |
685 | #endif /* __mips_n32 || __mips_n64 */ | 700 | #endif /* __mips_n32 || __mips_n64 */ | |
686 | 701 | |||
687 | /* | 702 | /* | |
688 | * The DYNAMIC_STATUS_MASK option adds an additional masking operation | 703 | * The DYNAMIC_STATUS_MASK option adds an additional masking operation | |
689 | * when updating the hardware interrupt mask in the status register. | 704 | * when updating the hardware interrupt mask in the status register. | |
690 | * | 705 | * | |
691 | * This is useful for platforms that need to at run-time mask | 706 | * This is useful for platforms that need to at run-time mask | |
692 | * interrupts based on motherboard configuration or to handle | 707 | * interrupts based on motherboard configuration or to handle | |
693 | * slowly clearing interrupts. | 708 | * slowly clearing interrupts. | |
694 | * | 709 | * | |
695 | * XXX this is only currently implemented for mips3. | 710 | * XXX this is only currently implemented for mips3. | |
696 | */ | 711 | */ | |
697 | #ifdef MIPS_DYNAMIC_STATUS_MASK | 712 | #ifdef MIPS_DYNAMIC_STATUS_MASK | |
698 | #define DYNAMIC_STATUS_MASK(sr,scratch) \ | 713 | #define DYNAMIC_STATUS_MASK(sr,scratch) \ | |
699 | lw scratch, mips_dynamic_status_mask; \ | 714 | lw scratch, mips_dynamic_status_mask; \ | |
700 | and sr, sr, scratch | 715 | and sr, sr, scratch | |
701 | 716 | |||
702 | #define DYNAMIC_STATUS_MASK_TOUSER(sr,scratch1) \ | 717 | #define DYNAMIC_STATUS_MASK_TOUSER(sr,scratch1) \ | |
703 | ori sr, (MIPS_INT_MASK | MIPS_SR_INT_IE); \ | 718 | ori sr, (MIPS_INT_MASK | MIPS_SR_INT_IE); \ | |
704 | DYNAMIC_STATUS_MASK(sr,scratch1) | 719 | DYNAMIC_STATUS_MASK(sr,scratch1) | |
705 | #else | 720 | #else | |
706 | #define DYNAMIC_STATUS_MASK(sr,scratch) | 721 | #define DYNAMIC_STATUS_MASK(sr,scratch) | |
707 | #define DYNAMIC_STATUS_MASK_TOUSER(sr,scratch1) | 722 | #define DYNAMIC_STATUS_MASK_TOUSER(sr,scratch1) | |
708 | #endif | 723 | #endif | |
709 | 724 | |||
710 | /* See lock_stubs.S. */ | 725 | /* See lock_stubs.S. */ | |
711 | #define LOG2_MIPS_LOCK_RAS_SIZE 8 | 726 | #define LOG2_MIPS_LOCK_RAS_SIZE 8 | |
712 | #define MIPS_LOCK_RAS_SIZE 256 /* 16 bytes left over */ | 727 | #define MIPS_LOCK_RAS_SIZE 256 /* 16 bytes left over */ | |
713 | 728 | |||
714 | #define CPUVAR(off) _C_LABEL(cpu_info_store)+__CONCAT(CPU_INFO_,off) | 729 | #define CPUVAR(off) _C_LABEL(cpu_info_store)+__CONCAT(CPU_INFO_,off) | |
715 | 730 | |||
716 | #endif /* _MIPS_ASM_H */ | 731 | #endif /* _MIPS_ASM_H */ |
--- src/sys/arch/mips/mips/lock_stubs_llsc.S 2022/02/27 19:21:44 1.14
+++ src/sys/arch/mips/mips/lock_stubs_llsc.S 2022/02/27 19:21:53 1.15
@@ -1,363 +1,378 @@
-/*	$NetBSD: lock_stubs_llsc.S,v 1.14 2022/02/27 19:21:44 riastradh Exp $	*/
+/*	$NetBSD: lock_stubs_llsc.S,v 1.15 2022/02/27 19:21:53 riastradh Exp $	*/
 
 /*-
  * Copyright (c) 2007 The NetBSD Foundation, Inc.
  * All rights reserved.
  *
  * This code is derived from software contributed to The NetBSD Foundation
  * by Andrew Doran.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
  * are met:
  * 1. Redistributions of source code must retain the above copyright
  *    notice, this list of conditions and the following disclaimer.
  * 2. Redistributions in binary form must reproduce the above copyright
  *    notice, this list of conditions and the following disclaimer in the
  *    documentation and/or other materials provided with the distribution.
  *
  * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
  * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
  * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
  * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
  * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
  * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
  * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
  * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
  * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
  * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
  * POSSIBILITY OF SUCH DAMAGE.
  */
 
 #include "opt_cputype.h"
 #include "opt_lockdebug.h"
 
 #include <sys/errno.h>
 
 #include <machine/asm.h>
 
-RCSID("$NetBSD: lock_stubs_llsc.S,v 1.14 2022/02/27 19:21:44 riastradh Exp $")
+RCSID("$NetBSD: lock_stubs_llsc.S,v 1.15 2022/02/27 19:21:53 riastradh Exp $")
 
 #include "assym.h"
 
 /*
  * Set ISA level for the assembler.
  * XXX Clean up with a macro?  Same code fragment is in mipsX_subr.S too.
  * XXX Key off build abi instead of processor type?
  */
 #if defined(MIPS3)
 	.set	mips3
 #endif
 
 #if defined(MIPS32)
 	.set	mips32
 #endif
 
 #if defined(MIPS64)
 	.set	mips64
 #endif
 
 	.set	noreorder
 	.set	noat
 
 /*
  * unsigned long atomic_cas_ulong_llsc(volatile unsigned long *val,
  *	unsigned long old, unsigned long new);
+ *
+ * For hysterical raisins in sys/arch/mips/include/lock.h, success
+ * implies load-acquire.  The SYNC_ACQ here could be moved there
+ * instead.
  */
 STATIC_LEAF(llsc_atomic_cas_ulong)
 	LLSCSYNC
 1:
 	LONG_LL	t0, (a0)
 	bne	t0, a1, 2f
 	move	t1, a2
 	LONG_SC	t1, (a0)
 	beqz	t1, 1b
 	nop
-	SYNC
+	SYNC_ACQ
 	j	ra
 	move	v0, a1
 2:
 	j	ra
 	move	v0, t0
 END(llsc_atomic_cas_ulong)
 
 /*
  * unsigned int _atomic_cas_uint_llsc(volatile unsigned int *val,
  *	unsigned int old, unsigned int new);
+ *
+ * For hysterical raisins in sys/arch/mips/include/lock.h, success
+ * implies load-acquire.  The SYNC_ACQ here could be moved there
+ * instead.
  */
 STATIC_LEAF(llsc_atomic_cas_uint)
 	LLSCSYNC
 1:
 	INT_LL	t0, (a0)
 	bne	t0, a1, 2f
 	move	t1, a2
 	INT_SC	t1, (a0)
 	beqz	t1, 1b
 	nop
-	SYNC
+	SYNC_ACQ
 	j	ra
 	move	v0, a1
 2:
 	j	ra
 	move	v0, t0
 END(llsc_atomic_cas_uint)
 
 /*
  * int llsc_ucas_32(volatile uint32_t *ptr, uint32_t old,
  *	uint32_t new, uint32_t *ret)
+ *
+ * Implies release/acquire barriers until someone tells me
+ * otherwise about _ucas_32/64.
  */
 STATIC_LEAF(llsc_ucas_32)
 	.set	at
 	PTR_LA	v0, _C_LABEL(llsc_ucaserr)
 	.set	noat
 	PTR_L	v1, L_PCB(MIPS_CURLWP)
 	PTR_S	v0, PCB_ONFAULT(v1)
 	bltz	a0, _C_LABEL(llsc_ucaserr)
 	nop
 	move	v0, zero
+	SYNC_REL
 
 	LLSCSYNC
 1:	ll	t0, 0(a0)
 	bne	t0, a1, 2f
 	move	t1, a2
 	sc	t1, 0(a0)
 	beqz	t1, 1b
 	nop
-	SYNC
+	SYNC_ACQ
 
 2:	PTR_S	zero, PCB_ONFAULT(v1)
 	j	ra
 	sw	t0, 0(a3)
 END(llsc_ucas_32)
 
 #ifdef _LP64
 /*
  * int llsc_ucas_64(volatile uint64_t *ptr, uint64_t old,
  *	uint64_t new, uint64_t *ret)
  */
 STATIC_LEAF(llsc_ucas_64)
 	.set	at
 	PTR_LA	v0, _C_LABEL(llsc_ucaserr)
 	.set	noat
 	PTR_L	v1, L_PCB(MIPS_CURLWP)
 	PTR_S	v0, PCB_ONFAULT(v1)
 	bltz	a0, _C_LABEL(llsc_ucaserr)
 	nop
 	move	v0, zero
+	SYNC_REL
 
 	LLSCSYNC
 1:	lld	t0, 0(a0)
 	bne	t0, a1, 2f
 	move	t1, a2
 	scd	t1, 0(a0)
 	beqz	t1, 1b
 	nop
-	SYNC
+	SYNC_ACQ
 
 2:	PTR_S	zero, PCB_ONFAULT(v1)
 	j	ra
 	sd	t0, 0(a3)
 END(llsc_ucas_64)
 #endif /* _LP64 */
 
 STATIC_LEAF_NOPROFILE(llsc_ucaserr)
 	PTR_S	zero, PCB_ONFAULT(v1)	# reset fault handler
 	j	ra
 	li	v0, EFAULT		# return EFAULT on error
 END(llsc_ucaserr)
 
 #ifndef LOCKDEBUG
 
 /*
  * void	mutex_enter(kmutex_t *mtx);
  */
 STATIC_LEAF(llsc_mutex_enter)
 	LLSCSYNC
 	PTR_LL	t0, MTX_OWNER(a0)
 1:
 	bnez	t0, 2f
 	move	t2, MIPS_CURLWP
 	PTR_SC	t2, MTX_OWNER(a0)
 	beqz	t2, 1b
 	PTR_LL	t0, MTX_OWNER(a0)
 	j	ra
-	BDSYNC
+	BDSYNC_ACQ
 2:
 	j	_C_LABEL(mutex_vector_enter)
 	nop
 END(llsc_mutex_enter)
 
 /*
  * void	mutex_exit(kmutex_t *mtx);
  */
 STATIC_LEAF(llsc_mutex_exit)
+	SYNC_REL
 	LLSCSYNC
 	PTR_LL	t0, MTX_OWNER(a0)
 	SYNC
 1:
 	bne	t0, MIPS_CURLWP, 2f
 	move	t2, zero
 	PTR_SC	t2, MTX_OWNER(a0)
 	beqz	t2, 1b
 	PTR_LL	t0, MTX_OWNER(a0)
 	j	ra
-	BDSYNC
+	BDSYNC_PLUNGER
 2:
 	j	_C_LABEL(mutex_vector_exit)
 	nop
 END(llsc_mutex_exit)
 
 /*
  * void	mutex_spin_enter(kmutex_t *mtx);
  */
 STATIC_NESTED(llsc_mutex_spin_enter, CALLFRAME_SIZ, ra)
 	move	t0, a0
 	PTR_L	t2, L_CPU(MIPS_CURLWP)
 	INT_L	a0, MTX_IPL(t0)
 #ifdef PARANOIA
 	INT_L	ta1, CPU_INFO_CPL(t2)
 #endif
 
 	/*
 	 * We need to raise our IPL.  But it means calling another routine
 	 * but it's written to have little overhead.  call splraise
 	 * (only uses a0-a3 and v0-v1)
 	 */
 	move	t3, ra			# need to save ra
 	jal	_C_LABEL(splraise)
 	nop
 	move	ra, t3			# move ra back
 #ifdef PARANOIA
 10:	bne	ta1, v0, 10b		# loop forever if v0 != ta1
 	nop
 #endif /* PARANOIA */
 
 	/*
 	 * If this is the first lock of the mutex, store the previous IPL for
 	 * exit.  Even if an interrupt happens, the mutex count will not change.
 	 */
 1:
 	INT_L	ta2, CPU_INFO_MTX_COUNT(t2)
 	INT_ADDU ta3, ta2, -1
 	INT_S	ta3, CPU_INFO_MTX_COUNT(t2)
 	bltz	ta2, 2f
 	nop
 	INT_S	v0, CPU_INFO_MTX_OLDSPL(t2)	/* returned by splraise */
 2:
 #ifdef PARANOIA
 	INT_L	ta1, CPU_INFO_MTX_OLDSPL(t2)
 	INT_L	ta2, CPU_INFO_CPL(t2)	# get updated CPL
 	sltu	v0, ta2, ta0		# v0 = cpl < mtx_ipl
 	sltu	v1, ta2, ta1		# v1 = cpl < oldspl
 	sll	v0, 1
 	or	v0, v1
 12:	bnez	v0, 12b			# loop forever if any are true
 	nop
 #endif /* PARANOIA */
 
 	LLSCSYNC
 	INT_LL	t3, MTX_LOCK(t0)
 3:
 	bnez	t3, 4f
 	li	t1, 1
 	INT_SC	t1, MTX_LOCK(t0)
 	beqz	t1, 3b
 	INT_LL	t3, MTX_LOCK(t0)
 	j	ra
-	BDSYNC
+	BDSYNC_ACQ
 4:
 	j	_C_LABEL(mutex_spin_retry)
 	move	a0, t0
 END(llsc_mutex_spin_enter)
 
 /*
  * void	mutex_spin_exit(kmutex_t *mtx);
  */
 LEAF(llsc_mutex_spin_exit)
+	SYNC_REL
 	PTR_L	t2, L_CPU(MIPS_CURLWP)
 #if defined(DIAGNOSTIC)
 	INT_L	t0, MTX_LOCK(a0)
 	beqz	t0, 2f
 	nop
 #endif
 	INT_S	zero, MTX_LOCK(a0)
 
 	/*
 	 * We need to grab this before the mutex count is incremented
 	 * because if we get an interrupt, it may see the count as zero
 	 * and overwrite the oldspl value with a bogus value.
 	 */
 #ifdef PARANOIA
 	INT_L	a2, MTX_IPL(a0)
 #endif
 	INT_L	a0, CPU_INFO_MTX_OLDSPL(t2)
 
 	/*
 	 * Increment the mutex count
 	 */
 	INT_L	t0, CPU_INFO_MTX_COUNT(t2)
 	INT_ADDU t0, t0, 1
 	INT_S	t0, CPU_INFO_MTX_COUNT(t2)
 
 	/*
 	 * If the IPL doesn't change, nothing to do
 	 */
 	INT_L	a1, CPU_INFO_CPL(t2)
 
 #ifdef PARANOIA
 	sltu	v0, a1, a2		# v0 = cpl < mtx_ipl
 	sltu	v1, a1, a0		# v1 = cpl < oldspl
 	sll	v0, 1
 	or	v0, v1
 12:	bnez	v0, 12b			# loop forever if either is true
 	nop
 #endif /* PARANOIA */
 
 	beq	a0, a1, 1f		# if oldspl == cpl
 	nop				# no reason to drop ipl
 
 	bltz	t0, 1f			# there are still holders
 	nop				# so don't drop IPL
 
 	/*
 	 * Mutex count is zero so we need to restore the old IPL
 	 */
 #ifdef PARANOIA
 	sltiu	v0, a0, IPL_HIGH+1
 13:	beqz	v0, 13b			# loop forever if ipl > IPL_HIGH
 	nop
 #endif
 	j	_C_LABEL(splx)
 	nop
 1:
 	j	ra
 	nop
 #if defined(DIAGNOSTIC)
 2:
 	j	_C_LABEL(mutex_vector_exit)
 	nop
 #endif
 END(llsc_mutex_spin_exit)
 #endif /* !LOCKDEBUG */
 
 	.rdata
 EXPORT_OBJECT(mips_llsc_locore_atomicvec)
 	PTR_WORD	llsc_atomic_cas_uint
 	PTR_WORD	llsc_atomic_cas_ulong
 	PTR_WORD	llsc_ucas_32
 #ifdef _LP64
 	PTR_WORD	llsc_ucas_64
 #else
 	PTR_WORD	0
 #endif /* _LP64 */
 #ifdef LOCKDEBUG
 	PTR_WORD	mutex_vector_enter
 	PTR_WORD	mutex_vector_exit
 	PTR_WORD	mutex_vector_enter
 	PTR_WORD	mutex_vector_exit
 #else
 	PTR_WORD	llsc_mutex_enter
 	PTR_WORD	llsc_mutex_exit
 	PTR_WORD	llsc_mutex_spin_enter
 	PTR_WORD	llsc_mutex_spin_exit
 #endif /* !LOCKDEBUG */
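
As a cross-check on the intended semantics, here is a hypothetical C11 sketch (not part of the change, and the function name `cas_acq_rel` is made up for illustration) of the ordering contract the ll/sc stubs above aim for: a release fence before the CAS plays the role of SYNC_REL before the ll, and an acquire fence after a successful sc plays the role of SYNC_ACQ. Like atomic_cas_uint(9), it returns the old value.

```c
#include <stdatomic.h>

/*
 * Hypothetical sketch of the barrier placement used by the ll/sc
 * stubs, phrased with C11 atomics.  The fences are deliberately
 * separate from the CAS itself, mirroring how SYNC_REL/SYNC_ACQ
 * bracket the ll/sc sequence in the assembly.
 */
static unsigned int
cas_acq_rel(_Atomic unsigned int *p, unsigned int old, unsigned int new)
{
	unsigned int expected = old;

	atomic_thread_fence(memory_order_release);	/* ~ SYNC_REL */
	if (atomic_compare_exchange_strong_explicit(p, &expected, new,
	    memory_order_relaxed, memory_order_relaxed))
		atomic_thread_fence(memory_order_acquire); /* ~ SYNC_ACQ */
	return expected;	/* old value, as atomic_cas_uint returns */
}
```

Note the acquire fence is only needed on success for lock-acquisition uses, which is why the assembly places SYNC_ACQ after the sc succeeds rather than on the failure path.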