Thu Apr 18 15:51:36 2024 UTC
Pull up following revision(s) (requested by riastradh in ticket #1830):

	sys/kern/subr_workqueue.c: revision 1.40
	sys/kern/subr_workqueue.c: revision 1.41
	sys/kern/subr_workqueue.c: revision 1.42
	sys/kern/subr_workqueue.c: revision 1.43
	sys/kern/subr_workqueue.c: revision 1.44
	sys/kern/subr_workqueue.c: revision 1.45
	sys/kern/subr_workqueue.c: revision 1.46
	tests/rump/kernspace/workqueue.c: revision 1.7
	sys/kern/subr_workqueue.c: revision 1.47
	tests/rump/kernspace/workqueue.c: revision 1.8
	tests/rump/kernspace/workqueue.c: revision 1.9
	tests/rump/rumpkern/t_workqueue.c: revision 1.3
	tests/rump/rumpkern/t_workqueue.c: revision 1.4
	tests/rump/kernspace/kernspace.h: revision 1.9
	tests/rump/rumpkern/Makefile: revision 1.20
	sys/kern/subr_workqueue.c: revision 1.39
	share/man/man9/workqueue.9: revision 1.15
	(all via patch)

workqueue: Lift unnecessary restriction on workqueue_wait.

Allow multiple concurrent waits at a time, and allow enqueueing work
at the same time (as long as it's not the work we're waiting for).

This way multiple users can use a shared global workqueue and safely
wait for individual work items concurrently, while the workqueue is
still in use for other items (e.g., wg(4) peers).

This has the side effect of taking away a diagnostic measure, but I
think allowing the diagnostic's false positives instead of rejecting
them is worth it.  We could cheaply add it back with some false
negatives if it's important.
workqueue(9): workqueue_wait and workqueue_destroy may sleep.

But might not, so assert sleepable up front.

workqueue(9): Sprinkle dtrace probes.

tests/rump/rumpkern: Use PROGDPLIBS, not explicit -L/-l.

This way we relink the t_* test programs whenever changes under
tests/rump/kernspace modify libkernspace.a.
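The change amounts to something like the following bsd.prog.mk-style fragment (the variable values and paths here are illustrative, not the exact revision):

```make
# Before: an explicit -L/-l records no make(1) dependency on the
# archive, so the t_* programs are not relinked when it changes.
#LDADD+=	-L${LIBKERNSPACEDIR} -lkernspace

# After: PROGDPLIBS registers both the -L/-l linker flags and a
# dependency on libkernspace.a, forcing a relink when it is rebuilt.
PROGDPLIBS+=	kernspace	${.CURDIR}/../kernspace
```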

workqueue(9) tests: Nix trailing whitespace.

workqueue(9) tests: Destroy struct work immediately on entry.

workqueue(9) tests: Add test for PR kern/57574.

workqueue(9): Avoid touching running work items in workqueue_wait.

As soon as the workqueue function has been called, it is forbidden to
touch the struct work passed to it -- the function might free or
reuse the data structure it is embedded in.

So workqueue_wait is forbidden to search the queue for the batch of
running work items.  Instead, use a generation number which is odd
while the thread is processing a batch of work and even when not.
There's still a small optimization available with the struct work
pointer to wait for: if we find the work item in one of the per-CPU
_pending_ queues, then after we wait for a batch of work to complete
on that CPU, we don't need to wait for work on any other CPUs.
PR kern/57574

workqueue(9): Sprinkle dtrace probes for workqueue_wait edge cases.

Let's make it easy to find out whether these are hit.
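Assuming a kernel with SDT support, hits on these edge cases could be observed with a dtrace(1) script along these lines (a sketch, not a tested invocation; note that `__` in an SDT probe name is conventionally written as `-` in dtrace):

```d
/* Report edge-case hits in workqueue_wait. */
sdt:kernel:workqueue:wait-self,
sdt:kernel:workqueue:wait-hit
{
	printf("%s wq=%p wk=%p", probename, arg0, arg1);
}
```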

workqueue(9): Stop violating queue(3) internals.

workqueue(9): Avoid unnecessary mutex_exit/enter cycle each loop.

workqueue(9): Sort includes.
No functional change intended.

workqueue(9): Factor out wq->wq_flags & WQ_FPU in workqueue_worker.
No functional change intended.  Makes it clearer that s is
initialized when used.


(martin)
diff -r1.12 -r1.12.6.1 src/share/man/man9/workqueue.9
diff -r1.37 -r1.37.6.1 src/sys/kern/subr_workqueue.c
diff -r1.8 -r1.8.2.1 src/tests/rump/kernspace/kernspace.h
diff -r1.6 -r1.6.8.1 src/tests/rump/kernspace/workqueue.c
diff -r1.18 -r1.18.2.1 src/tests/rump/rumpkern/Makefile
diff -r1.2 -r1.2.8.1 src/tests/rump/rumpkern/t_workqueue.c

cvs diff -r1.12 -r1.12.6.1 src/share/man/man9/workqueue.9

--- src/share/man/man9/workqueue.9 2017/12/28 07:00:52 1.12
+++ src/share/man/man9/workqueue.9 2024/04/18 15:51:36 1.12.6.1
@@ -1,14 +1,14 @@
-.\" $NetBSD: workqueue.9,v 1.12 2017/12/28 07:00:52 ozaki-r Exp $
+.\" $NetBSD: workqueue.9,v 1.12.6.1 2024/04/18 15:51:36 martin Exp $
 .\"
 .\" Copyright (c)2005 YAMAMOTO Takashi,
 .\" All rights reserved.
 .\"
 .\" Redistribution and use in source and binary forms, with or without
 .\" modification, are permitted provided that the following conditions
 .\" are met:
 .\" 1. Redistributions of source code must retain the above copyright
 .\"    notice, this list of conditions and the following disclaimer.
 .\" 2. Redistributions in binary form must reproduce the above copyright
 .\"    notice, this list of conditions and the following disclaimer in the
 .\"    documentation and/or other materials provided with the distribution.
 .\"
@@ -118,31 +118,31 @@ must be
 The enqueued work will be processed in a thread context.
 A work must not be enqueued again until the callback is called by
 the
 .Nm
 framework.
 .Pp
 .\" - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
 .Fn workqueue_wait
 waits for a specified work
 .Fa wk
 on the workqueue
 .Fa wq
 to finish.
-The caller must ensure that no new work will be enqueued to the workqueue
-beforehand.
-Note that if the workqueue is
-.Dv WQ_PERCPU ,
-the caller can enqueue a new work to another queue other than the waiting queue.
+The caller must ensure that
+.Fa wk
+will not be enqueued to the workqueue again until after
+.Fn workqueue_wait
+returns.
 .Pp
 .\" - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
 .Fn workqueue_destroy
 destroys a workqueue and frees associated resources.
 The caller should ensure that the workqueue has no work enqueued beforehand.
 .\" ------------------------------------------------------------
 .Sh RETURN VALUES
 .Fn workqueue_create
 returns 0 on success.
 Otherwise, it returns an
 .Xr errno 2 .
 .\" ------------------------------------------------------------
 .Sh CODE REFERENCES

cvs diff -r1.37 -r1.37.6.1 src/sys/kern/subr_workqueue.c

--- src/sys/kern/subr_workqueue.c 2018/06/13 05:26:12 1.37
+++ src/sys/kern/subr_workqueue.c 2024/04/18 15:51:35 1.37.6.1
@@ -1,14 +1,14 @@
-/*	$NetBSD: subr_workqueue.c,v 1.37 2018/06/13 05:26:12 ozaki-r Exp $	*/
+/*	$NetBSD: subr_workqueue.c,v 1.37.6.1 2024/04/18 15:51:35 martin Exp $	*/
 
 /*-
  * Copyright (c)2002, 2005, 2006, 2007 YAMAMOTO Takashi,
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
  * are met:
  * 1. Redistributions of source code must retain the above copyright
  *    notice, this list of conditions and the following disclaimer.
  * 2. Redistributions in binary form must reproduce the above copyright
  *    notice, this list of conditions and the following disclaimer in the
  *    documentation and/or other materials provided with the distribution.
@@ -17,69 +17,113 @@
  * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
  * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
  * ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
  * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
  * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
  * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
  * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
  * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
  * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
  * SUCH DAMAGE.
  */
 
 #include <sys/cdefs.h>
-__KERNEL_RCSID(0, "$NetBSD: subr_workqueue.c,v 1.37 2018/06/13 05:26:12 ozaki-r Exp $");
+__KERNEL_RCSID(0, "$NetBSD: subr_workqueue.c,v 1.37.6.1 2024/04/18 15:51:35 martin Exp $");
 
 #include <sys/param.h>
+
+#include <sys/condvar.h>
 #include <sys/cpu.h>
-#include <sys/systm.h>
-#include <sys/kthread.h>
 #include <sys/kmem.h>
-#include <sys/proc.h>
-#include <sys/workqueue.h>
+#include <sys/kthread.h>
 #include <sys/mutex.h>
-#include <sys/condvar.h>
+#include <sys/proc.h>
 #include <sys/queue.h>
+#include <sys/sdt.h>
+#include <sys/systm.h>
+#include <sys/workqueue.h>
 
 typedef struct work_impl {
 	SIMPLEQ_ENTRY(work_impl) wk_entry;
 } work_impl_t;
 
 SIMPLEQ_HEAD(workqhead, work_impl);
 
 struct workqueue_queue {
 	kmutex_t q_mutex;
 	kcondvar_t q_cv;
 	struct workqhead q_queue_pending;
-	struct workqhead q_queue_running;
+	uint64_t q_gen;
 	lwp_t *q_worker;
-	work_impl_t *q_waiter;
 };
 
 struct workqueue {
 	void (*wq_func)(struct work *, void *);
 	void *wq_arg;
 	int wq_flags;
 
 	char wq_name[MAXCOMLEN];
 	pri_t wq_prio;
 	void *wq_ptr;
 };
 
 #define WQ_SIZE (roundup2(sizeof(struct workqueue), coherency_unit))
 #define WQ_QUEUE_SIZE (roundup2(sizeof(struct workqueue_queue), coherency_unit))
 
 #define POISON 0xaabbccdd
 
+SDT_PROBE_DEFINE7(sdt, kernel, workqueue, create,
+    "struct workqueue *"/*wq*/,
+    "const char *"/*name*/,
+    "void (*)(struct work *, void *)"/*func*/,
+    "void *"/*arg*/,
+    "pri_t"/*prio*/,
+    "int"/*ipl*/,
+    "int"/*flags*/);
+SDT_PROBE_DEFINE1(sdt, kernel, workqueue, destroy,
+    "struct workqueue *"/*wq*/);
+
+SDT_PROBE_DEFINE3(sdt, kernel, workqueue, enqueue,
+    "struct workqueue *"/*wq*/,
+    "struct work *"/*wk*/,
+    "struct cpu_info *"/*ci*/);
+SDT_PROBE_DEFINE4(sdt, kernel, workqueue, entry,
+    "struct workqueue *"/*wq*/,
+    "struct work *"/*wk*/,
+    "void (*)(struct work *, void *)"/*func*/,
+    "void *"/*arg*/);
+SDT_PROBE_DEFINE4(sdt, kernel, workqueue, return,
+    "struct workqueue *"/*wq*/,
+    "struct work *"/*wk*/,
+    "void (*)(struct work *, void *)"/*func*/,
+    "void *"/*arg*/);
+SDT_PROBE_DEFINE2(sdt, kernel, workqueue, wait__start,
+    "struct workqueue *"/*wq*/,
+    "struct work *"/*wk*/);
+SDT_PROBE_DEFINE2(sdt, kernel, workqueue, wait__self,
+    "struct workqueue *"/*wq*/,
+    "struct work *"/*wk*/);
+SDT_PROBE_DEFINE2(sdt, kernel, workqueue, wait__hit,
+    "struct workqueue *"/*wq*/,
+    "struct work *"/*wk*/);
+SDT_PROBE_DEFINE2(sdt, kernel, workqueue, wait__done,
+    "struct workqueue *"/*wq*/,
+    "struct work *"/*wk*/);
+
+SDT_PROBE_DEFINE1(sdt, kernel, workqueue, exit__start,
+    "struct workqueue *"/*wq*/);
+SDT_PROBE_DEFINE1(sdt, kernel, workqueue, exit__done,
+    "struct workqueue *"/*wq*/);
+
 static size_t
 workqueue_size(int flags)
 {
 
 	return WQ_SIZE
 	    + ((flags & WQ_PERCPU) != 0 ? ncpu : 1) * WQ_QUEUE_SIZE
 	    + coherency_unit;
 }
 
 static struct workqueue_queue *
 workqueue_queue_lookup(struct workqueue *wq, struct cpu_info *ci)
 {
 	u_int idx = 0;
@@ -87,70 +131,75 @@ workqueue_queue_lookup(struct workqueue
 	if (wq->wq_flags & WQ_PERCPU) {
 		idx = ci ? cpu_index(ci) : cpu_index(curcpu());
 	}
 
 	return (void *)((uintptr_t)(wq) + WQ_SIZE + (idx * WQ_QUEUE_SIZE));
 }
 
 static void
 workqueue_runlist(struct workqueue *wq, struct workqhead *list)
 {
 	work_impl_t *wk;
 	work_impl_t *next;
 
-	/*
-	 * note that "list" is not a complete SIMPLEQ.
-	 */
-
 	for (wk = SIMPLEQ_FIRST(list); wk != NULL; wk = next) {
 		next = SIMPLEQ_NEXT(wk, wk_entry);
+		SDT_PROBE4(sdt, kernel, workqueue, entry,
+		    wq, wk, wq->wq_func, wq->wq_arg);
 		(*wq->wq_func)((void *)wk, wq->wq_arg);
+		SDT_PROBE4(sdt, kernel, workqueue, return,
+		    wq, wk, wq->wq_func, wq->wq_arg);
 	}
 }
 
 static void
 workqueue_worker(void *cookie)
 {
 	struct workqueue *wq = cookie;
 	struct workqueue_queue *q;
 
 	/* find the workqueue of this kthread */
 	q = workqueue_queue_lookup(wq, curlwp->l_cpu);
 
+	mutex_enter(&q->q_mutex);
 	for (;;) {
-		/*
-		 * we violate abstraction of SIMPLEQ.
-		 */
+		struct workqhead tmp;
+
+		SIMPLEQ_INIT(&tmp);
 
-		mutex_enter(&q->q_mutex);
 		while (SIMPLEQ_EMPTY(&q->q_queue_pending))
 			cv_wait(&q->q_cv, &q->q_mutex);
-		KASSERT(SIMPLEQ_EMPTY(&q->q_queue_running));
-		q->q_queue_running.sqh_first =
-		    q->q_queue_pending.sqh_first; /* XXX */
+		SIMPLEQ_CONCAT(&tmp, &q->q_queue_pending);
 		SIMPLEQ_INIT(&q->q_queue_pending);
+
+		/*
+		 * Mark the queue as actively running a batch of work
+		 * by setting the generation number odd.
+		 */
+		q->q_gen |= 1;
 		mutex_exit(&q->q_mutex);
 
-		workqueue_runlist(wq, &q->q_queue_running);
+		workqueue_runlist(wq, &tmp);
 
+		/*
+		 * Notify workqueue_wait that we have completed a batch
+		 * of work by incrementing the generation number.
+		 */
 		mutex_enter(&q->q_mutex);
-		KASSERT(!SIMPLEQ_EMPTY(&q->q_queue_running));
-		SIMPLEQ_INIT(&q->q_queue_running);
-		if (__predict_false(q->q_waiter != NULL)) {
-			/* Wake up workqueue_wait */
-			cv_signal(&q->q_cv);
-		}
-		mutex_exit(&q->q_mutex);
+		KASSERTMSG(q->q_gen & 1, "q=%p gen=%"PRIu64, q, q->q_gen);
+		q->q_gen++;
+		cv_broadcast(&q->q_cv);
 	}
+	mutex_exit(&q->q_mutex);
 }
 
 static void
 workqueue_init(struct workqueue *wq, const char *name,
     void (*callback_func)(struct work *, void *), void *callback_arg,
     pri_t prio, int ipl)
 {
 
 	KASSERT(sizeof(wq->wq_name) > strlen(name));
 	strncpy(wq->wq_name, name, sizeof(wq->wq_name));
 
 	wq->wq_prio = prio;
 	wq->wq_func = callback_func;
@@ -158,27 +207,27 @@ workqueue_init(struct workqueue *wq, con
 }
 
 static int
 workqueue_initqueue(struct workqueue *wq, struct workqueue_queue *q,
     int ipl, struct cpu_info *ci)
 {
 	int error, ktf;
 
 	KASSERT(q->q_worker == NULL);
 
 	mutex_init(&q->q_mutex, MUTEX_DEFAULT, ipl);
 	cv_init(&q->q_cv, wq->wq_name);
 	SIMPLEQ_INIT(&q->q_queue_pending);
-	SIMPLEQ_INIT(&q->q_queue_running);
+	q->q_gen = 0;
 	ktf = ((wq->wq_flags & WQ_MPSAFE) != 0 ? KTHREAD_MPSAFE : 0);
 	if (wq->wq_prio < PRI_KERNEL)
 		ktf |= KTHREAD_TS;
 	if (ci) {
 		error = kthread_create(wq->wq_prio, ktf, ci, workqueue_worker,
 		    wq, &q->q_worker, "%s/%u", wq->wq_name, ci->ci_index);
 	} else {
 		error = kthread_create(wq->wq_prio, ktf, ci, workqueue_worker,
 		    wq, &q->q_worker, "%s", wq->wq_name);
 	}
 	if (error != 0) {
 		mutex_destroy(&q->q_mutex);
 		cv_destroy(&q->q_cv);
@@ -196,44 +245,44 @@ static void
 workqueue_exit(struct work *wk, void *arg)
 {
 	struct workqueue_exitargs *wqe = (void *)wk;
 	struct workqueue_queue *q = wqe->wqe_q;
 
 	/*
 	 * only competition at this point is workqueue_finiqueue.
 	 */
 
 	KASSERT(q->q_worker == curlwp);
 	KASSERT(SIMPLEQ_EMPTY(&q->q_queue_pending));
 	mutex_enter(&q->q_mutex);
 	q->q_worker = NULL;
-	cv_signal(&q->q_cv);
+	cv_broadcast(&q->q_cv);
 	mutex_exit(&q->q_mutex);
 	kthread_exit(0);
 }
 
 static void
 workqueue_finiqueue(struct workqueue *wq, struct workqueue_queue *q)
 {
 	struct workqueue_exitargs wqe;
 
 	KASSERT(wq->wq_func == workqueue_exit);
 
 	wqe.wqe_q = q;
 	KASSERT(SIMPLEQ_EMPTY(&q->q_queue_pending));
 	KASSERT(q->q_worker != NULL);
 	mutex_enter(&q->q_mutex);
 	SIMPLEQ_INSERT_TAIL(&q->q_queue_pending, &wqe.wqe_wk, wk_entry);
-	cv_signal(&q->q_cv);
+	cv_broadcast(&q->q_cv);
 	while (q->q_worker != NULL) {
 		cv_wait(&q->q_cv, &q->q_mutex);
 	}
 	mutex_exit(&q->q_mutex);
 	mutex_destroy(&q->q_mutex);
 	cv_destroy(&q->q_cv);
 }
 
 /* --- */
 
 int
 workqueue_create(struct workqueue **wqp, const char *name,
     void (*callback_func)(struct work *, void *), void *callback_arg,
@@ -271,121 +320,153 @@ workqueue_create(struct workqueue **wqp,
 		error = workqueue_initqueue(wq, q, ipl, NULL);
 	}
 
 	if (error != 0) {
 		workqueue_destroy(wq);
 	} else {
 		*wqp = wq;
 	}
 
 	return error;
 }
 
 static bool
-workqueue_q_wait(struct workqueue_queue *q, work_impl_t *wk_target)
+workqueue_q_wait(struct workqueue *wq, struct workqueue_queue *q,
+    work_impl_t *wk_target)
 {
 	work_impl_t *wk;
 	bool found = false;
+	uint64_t gen;
 
 	mutex_enter(&q->q_mutex);
-	if (q->q_worker == curlwp)
+
+	/*
+	 * Avoid a deadlock scenario.  We can't guarantee that
+	 * wk_target has completed at this point, but we can't wait for
+	 * it either, so do nothing.
+	 *
+	 * XXX Are there use-cases that require this semantics?
+	 */
+	if (q->q_worker == curlwp) {
+		SDT_PROBE2(sdt, kernel, workqueue, wait__self, wq, wk_target);
 		goto out;
+	}
+
+	/*
+	 * Wait until the target is no longer pending.  If we find it
+	 * on this queue, the caller can stop looking in other queues.
+	 * If we don't find it in this queue, however, we can't skip
+	 * waiting -- it may be hidden in the running queue which we
+	 * have no access to.
+	 */
 again:
 	SIMPLEQ_FOREACH(wk, &q->q_queue_pending, wk_entry) {
-		if (wk == wk_target)
-			goto found;
+		if (wk == wk_target) {
+			SDT_PROBE2(sdt, kernel, workqueue, wait__hit, wq, wk);
+			found = true;
+			cv_wait(&q->q_cv, &q->q_mutex);
+			goto again;
+		}
 	}
-	SIMPLEQ_FOREACH(wk, &q->q_queue_running, wk_entry) {
-		if (wk == wk_target)
-			goto found;
-	}
- found:
-	if (wk != NULL) {
-		found = true;
-		KASSERT(q->q_waiter == NULL);
-		q->q_waiter = wk;
-		cv_wait(&q->q_cv, &q->q_mutex);
-		goto again;
-	}
-	if (q->q_waiter != NULL)
-		q->q_waiter = NULL;
+
+	/*
+	 * The target may be in the batch of work currently running,
+	 * but we can't touch that queue.  So if there's anything
+	 * running, wait until the generation changes.
+	 */
+	gen = q->q_gen;
+	if (gen & 1) {
+		do
+			cv_wait(&q->q_cv, &q->q_mutex);
+		while (gen == q->q_gen);
+	}
+
 out:
 	mutex_exit(&q->q_mutex);
 
 	return found;
 }
 
 /*
  * Wait for a specified work to finish.  The caller must ensure that no new
  * work will be enqueued before calling workqueue_wait.  Note that if the
  * workqueue is WQ_PERCPU, the caller can enqueue a new work to another queue
  * other than the waiting queue.
  */
 void
 workqueue_wait(struct workqueue *wq, struct work *wk)
 {
 	struct workqueue_queue *q;
 	bool found;
 
+	ASSERT_SLEEPABLE();
+
+	SDT_PROBE2(sdt, kernel, workqueue, wait__start, wq, wk);
 	if (ISSET(wq->wq_flags, WQ_PERCPU)) {
 		struct cpu_info *ci;
 		CPU_INFO_ITERATOR cii;
 		for (CPU_INFO_FOREACH(cii, ci)) {
 			q = workqueue_queue_lookup(wq, ci);
-			found = workqueue_q_wait(q, (work_impl_t *)wk);
+			found = workqueue_q_wait(wq, q, (work_impl_t *)wk);
 			if (found)
 				break;
 		}
 	} else {
 		q = workqueue_queue_lookup(wq, NULL);
-		(void) workqueue_q_wait(q, (work_impl_t *)wk);
+		(void)workqueue_q_wait(wq, q, (work_impl_t *)wk);
 	}
+	SDT_PROBE2(sdt, kernel, workqueue, wait__done, wq, wk);
 }
 
 void
 workqueue_destroy(struct workqueue *wq)
 {
 	struct workqueue_queue *q;
 	struct cpu_info *ci;
 	CPU_INFO_ITERATOR cii;
 
+	ASSERT_SLEEPABLE();
+
+	SDT_PROBE1(sdt, kernel, workqueue, exit__start, wq);
 	wq->wq_func = workqueue_exit;
 	for (CPU_INFO_FOREACH(cii, ci)) {
 		q = workqueue_queue_lookup(wq, ci);
 		if (q->q_worker != NULL) {
 			workqueue_finiqueue(wq, q);
 		}
 	}
+	SDT_PROBE1(sdt, kernel, workqueue, exit__done, wq);
 	kmem_free(wq->wq_ptr, workqueue_size(wq->wq_flags));
 }
 
 #ifdef DEBUG
 static void
 workqueue_check_duplication(struct workqueue_queue *q, work_impl_t *wk)
 {
 	work_impl_t *_wk;
 
 	SIMPLEQ_FOREACH(_wk, &q->q_queue_pending, wk_entry) {
 		if (_wk == wk)
 			panic("%s: tried to enqueue a queued work", __func__);
 	}
 }
 #endif
 
 void
 workqueue_enqueue(struct workqueue *wq, struct work *wk0, struct cpu_info *ci)
 {
 	struct workqueue_queue *q;
 	work_impl_t *wk = (void *)wk0;
 
+	SDT_PROBE3(sdt, kernel, workqueue, enqueue, wq, wk0, ci);
+
 	KASSERT(wq->wq_flags & WQ_PERCPU || ci == NULL);
 	q = workqueue_queue_lookup(wq, ci);
 
 	mutex_enter(&q->q_mutex);
-	KASSERT(q->q_waiter == NULL);
 #ifdef DEBUG
 	workqueue_check_duplication(q, wk);
 #endif
 	SIMPLEQ_INSERT_TAIL(&q->q_queue_pending, wk, wk_entry);
-	cv_signal(&q->q_cv);
+	cv_broadcast(&q->q_cv);
 	mutex_exit(&q->q_mutex);
 }

cvs diff -r1.8 -r1.8.2.1 src/tests/rump/kernspace/kernspace.h

--- src/tests/rump/kernspace/kernspace.h 2018/12/28 19:54:36 1.8
+++ src/tests/rump/kernspace/kernspace.h 2024/04/18 15:51:35 1.8.2.1
@@ -1,14 +1,14 @@
-/*	$NetBSD: kernspace.h,v 1.8 2018/12/28 19:54:36 thorpej Exp $	*/
+/*	$NetBSD: kernspace.h,v 1.8.2.1 2024/04/18 15:51:35 martin Exp $	*/
 
 /*-
  * Copyright (c) 2010, 2018 The NetBSD Foundation, Inc.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
  * are met:
  * 1. Redistributions of source code must retain the above copyright
  *    notice, this list of conditions and the following disclaimer.
  * 2. Redistributions in binary form must reproduce the above copyright
  *    notice, this list of conditions and the following disclaimer in the
  *    documentation and/or other materials provided with the distribution.
@@ -32,25 +32,26 @@
 
 enum locktest { LOCKME_MTX, LOCKME_RWDOUBLEX, LOCKME_RWRX, LOCKME_RWXR,
 	LOCKME_DESTROYHELD, LOCKME_DOUBLEINIT, LOCKME_DOUBLEFREE,
 	LOCKME_MEMFREE };
 
 void rumptest_busypage(void);
 void rumptest_threadjoin(void);
 void rumptest_thread(void);
 void rumptest_tsleep(void);
 void rumptest_alloc(size_t);
 void rumptest_lockme(enum locktest);
 void rumptest_workqueue1(void);
 void rumptest_workqueue_wait(void);
+void rumptest_workqueue_wait_pause(void);
 
 void rumptest_sendsig(char *);
 void rumptest_localsig(int);
 
 void rumptest_threadpool_unbound_lifecycle(void);
 void rumptest_threadpool_percpu_lifecycle(void);
 void rumptest_threadpool_unbound_schedule(void);
 void rumptest_threadpool_percpu_schedule(void);
 void rumptest_threadpool_job_cancel(void);
 void rumptest_threadpool_job_cancelthrash(void);
 
 #endif /* _TESTS_RUMP_KERNSPACE_KERNSPACE_H_ */

cvs diff -r1.6 -r1.6.8.1 src/tests/rump/kernspace/workqueue.c

--- src/tests/rump/kernspace/workqueue.c 2017/12/28 07:46:34 1.6
+++ src/tests/rump/kernspace/workqueue.c 2024/04/18 15:51:35 1.6.8.1
@@ -1,14 +1,14 @@
-/* $NetBSD: workqueue.c,v 1.6 2017/12/28 07:46:34 ozaki-r Exp $ */
+/* $NetBSD: workqueue.c,v 1.6.8.1 2024/04/18 15:51:35 martin Exp $ */
 
 /*-
  * Copyright (c) 2017 The NetBSD Foundation, Inc.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
  * are met:
  * 1. Redistributions of source code must retain the above copyright
  *    notice, this list of conditions and the following disclaimer.
  * 2. Redistributions in binary form must reproduce the above copyright
  *    notice, this list of conditions and the following disclaimer in the
  *    documentation and/or other materials provided with the distribution.
@@ -19,52 +19,58 @@
  * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
  * IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS BE LIABLE FOR ANY
  * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
  * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE
  * GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
  * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
  * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
  * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
  * IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  */
 
 #include <sys/cdefs.h>
 #if !defined(lint)
-__RCSID("$NetBSD: workqueue.c,v 1.6 2017/12/28 07:46:34 ozaki-r Exp $");
+__RCSID("$NetBSD: workqueue.c,v 1.6.8.1 2024/04/18 15:51:35 martin Exp $");
 #endif /* !lint */
 
 #include <sys/param.h>
 #include <sys/condvar.h>
 #include <sys/kernel.h>
 #include <sys/kmem.h>
 #include <sys/kthread.h>
 #include <sys/mutex.h>
 #include <sys/workqueue.h>
 
 #include "kernspace.h"
 
 struct test_softc {
 	kmutex_t mtx;
 	kcondvar_t cv;
 	struct workqueue *wq;
 	struct work wk;
 	int counter;
+	bool pause;
 };
 
 static void
 rump_work1(struct work *wk, void *arg)
 {
 	struct test_softc *sc = arg;
 
+	memset(wk, 0x5a, sizeof(*wk));
+
+	if (sc->pause)
+		kpause("tstwk1", /*intr*/false, /*timo*/2, /*lock*/NULL);
+
 	mutex_enter(&sc->mtx);
 	++sc->counter;
 	cv_broadcast(&sc->cv);
 	mutex_exit(&sc->mtx);
 }
 
 static struct test_softc *
 create_sc(void)
 {
 	int rv;
 	struct test_softc *sc;
 
 	sc = kmem_zalloc(sizeof(*sc), KM_SLEEP);
@@ -127,13 +133,44 @@ rumptest_workqueue_wait(void)
 		workqueue_enqueue(sc->wq, &sc->wk, NULL);
 		workqueue_wait(sc->wq, &sc->wk);
 		KASSERT(sc->counter == (i + 1));
 	}
 
 	KASSERT(sc->counter == ITERATIONS);
 
 	/* Wait for a work that is not enqueued. Just return immediately. */
 	workqueue_wait(sc->wq, &dummy);
 
 	destroy_sc(sc);
 #undef ITERATIONS
 }
+
+void
+rumptest_workqueue_wait_pause(void)
+{
+	struct test_softc *sc;
+	struct work dummy;
+
+	sc = create_sc();
+	sc->pause = true;
+
+#define ITERATIONS 1
+	for (size_t i = 0; i < ITERATIONS; ++i) {
+		struct work wk;
+
+		KASSERT(sc->counter == i);
+		workqueue_enqueue(sc->wq, &wk, NULL);
+		workqueue_enqueue(sc->wq, &sc->wk, NULL);
+		kpause("tstwk2", /*intr*/false, /*timo*/1, /*lock*/NULL);
+		workqueue_wait(sc->wq, &sc->wk);
+		workqueue_wait(sc->wq, &wk);
+		KASSERT(sc->counter == (i + 2));
+	}
+
+	KASSERT(sc->counter == 2*ITERATIONS);
+
+	/* Wait for a work that is not enqueued. Just return immediately. */
+	workqueue_wait(sc->wq, &dummy);
+
+	destroy_sc(sc);
+#undef ITERATIONS
+}

cvs diff -r1.18 -r1.18.2.1 src/tests/rump/rumpkern/Makefile

--- src/tests/rump/rumpkern/Makefile 2018/12/26 14:27:23 1.18
+++ src/tests/rump/rumpkern/Makefile 2024/04/18 15:51:35 1.18.2.1
@@ -1,33 +1,33 @@
-# $NetBSD: Makefile,v 1.18 2018/12/26 14:27:23 thorpej Exp $
+# $NetBSD: Makefile,v 1.18.2.1 2024/04/18 15:51:35 martin Exp $
 
 .include <bsd.own.mk>
 
 TESTSDIR=	${TESTSBASE}/rump/rumpkern
 
 TESTS_C=	t_copy
 TESTS_C+=	t_kern
 TESTS_C+=	t_lwproc
 TESTS_C+=	t_modcmd
 TESTS_C+=	t_modlinkset
 TESTS_C+=	t_signals
 TESTS_C+=	t_threads
 TESTS_C+=	t_threadpool
 TESTS_C+=	t_tsleep
 TESTS_C+=	t_workqueue
 TESTS_C+=	t_vm
 
 TESTS_SH=	t_sp
 
 SUBDIR+=	h_client h_server
 
 ADD_TO_LD=	-lrumpvfs -lrump -lrumpuser -lrump -lpthread
-LDADD.t_modlinkset+= -lukfs -lrumpdev_disk -lrumpdev -lrumpfs_msdos 
+LDADD.t_modlinkset+= -lukfs -lrumpdev_disk -lrumpdev -lrumpfs_msdos
 LDADD.t_modlinkset+= -lrumpfs_cd9660 ${ADD_TO_LD}
 LDADD+=		${ADD_TO_LD}
 
-KERNSPACE != cd ${.CURDIR}/../kernspace && ${PRINTOBJDIR}
-LDADD+=	-L${KERNSPACE} -lkernspace -lrump
+PROGDPLIBS+=	kernspace ${.CURDIR}/../kernspace
+LDADD+=	-lrump
 
 WARNS=	4
 
 .include <bsd.test.mk>
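
The Makefile change above swaps explicit -L/-l flags for PROGDPLIBS, the stock <bsd.prog.mk> idiom for linking against an in-tree library: besides generating the -L/-l flags, it records a make dependency on the library archive, which is what makes the t_* programs relink whenever libkernspace.a changes. The general shape (a sketch of the convention, not copied from any one Makefile):

```make
# PROGDPLIBS takes <name> <srcdir> pairs; bsd.prog.mk resolves the
# library's objdir, adds -L<objdir> -l<name>, and makes the program
# depend on lib<name>.a so it relinks when the library is rebuilt.
PROGDPLIBS+=	kernspace ${.CURDIR}/../kernspace
```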

cvs diff -r1.2 -r1.2.8.1 src/tests/rump/rumpkern/t_workqueue.c

--- src/tests/rump/rumpkern/t_workqueue.c 2017/12/28 07:10:26 1.2
+++ src/tests/rump/rumpkern/t_workqueue.c 2024/04/18 15:51:35 1.2.8.1
@@ -1,14 +1,14 @@
-/* $NetBSD: t_workqueue.c,v 1.2 2017/12/28 07:10:26 ozaki-r Exp $ */
+/* $NetBSD: t_workqueue.c,v 1.2.8.1 2024/04/18 15:51:35 martin Exp $ */
 
 /*-
  * Copyright (c) 2017 The NetBSD Foundation, Inc.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
  * are met:
  * 1. Redistributions of source code must retain the above copyright
  *    notice, this list of conditions and the following disclaimer.
  * 2. Redistributions in binary form must reproduce the above copyright
  *    notice, this list of conditions and the following disclaimer in the
  *    documentation and/or other materials provided with the distribution.
@@ -62,20 +62,46 @@ ATF_TC_HEAD(workqueue_wait, tc)
 	atf_tc_set_md_var(tc, "descr", "Checks workqueue_wait");
 }
 
 ATF_TC_BODY(workqueue_wait, tc)
 {
 
 	rump_init();
 
 	rump_schedule();
 	rumptest_workqueue_wait(); /* panics if fails */
 	rump_unschedule();
 }
 
+static void
+sigsegv(int signo)
+{
+	atf_tc_fail("SIGSEGV");
+}
+
+ATF_TC(workqueue_wait_pause);
+ATF_TC_HEAD(workqueue_wait_pause, tc)
+{
+
+	atf_tc_set_md_var(tc, "descr", "Checks workqueue_wait with pause");
+}
+
+ATF_TC_BODY(workqueue_wait_pause, tc)
+{
+
+	REQUIRE_LIBC(signal(SIGSEGV, &sigsegv), SIG_ERR);
+
+	rump_init();
+
+	rump_schedule();
+	rumptest_workqueue_wait_pause(); /* panics or SIGSEGVs if fails */
+	rump_unschedule();
+}
+
 ATF_TP_ADD_TCS(tp)
 {
 	ATF_TP_ADD_TC(tp, workqueue1);
 	ATF_TP_ADD_TC(tp, workqueue_wait);
+	ATF_TP_ADD_TC(tp, workqueue_wait_pause);
 
 	return atf_no_error();
 }