// SPDX-License-Identifier: GPL-2.0-only
/*
 * VMware vSockets Driver
 *
 * Copyright (C) 2007-2013 VMware, Inc. All rights reserved.
 */

/* Implementation notes:
 *
 * - There are two kinds of sockets: those created by user action (such as
 * calling socket(2)) and those created by incoming connection request packets.
 *
 * - There are two "global" tables, one for bound sockets (sockets that have
 * specified an address that they are responsible for) and one for connected
 * sockets (sockets that have established a connection with another socket).
 * These tables are "global" in that all sockets on the system are placed
 * within them. Note, though, that the bound table contains an extra entry
 * for a list of unbound sockets; SOCK_DGRAM sockets will always remain in
 * that list. The bound table is used solely to look up sockets when packets
 * are received, and that's not necessary for SOCK_DGRAM sockets since we
 * create a datagram handle for each and need not perform a lookup. Keeping
 * SOCK_DGRAM sockets out of the bound hash buckets reduces the chance of
 * collisions when looking for SOCK_STREAM sockets and saves us from having
 * to check the socket type in the hash table lookups.
 *
 * - Sockets created by user action will either be "client" sockets that
 * initiate a connection or "server" sockets that listen for connections; we
 * do not support simultaneous connects (two "client" sockets connecting).
 *
 * - "Server" sockets are referred to as listener sockets throughout this
 * implementation because they are in the TCP_LISTEN state. When a
 * connection request is received (the second kind of socket mentioned above),
 * we create a new socket and refer to it as a pending socket. These pending
 * sockets are placed on the pending connection list of the listener socket.
 * When future packets are received for the address the listener socket is
 * bound to, we check if the source of the packet is one with an existing
 * pending connection. If it is, we process the packet for the pending
 * socket. When that socket reaches the connected state, it is removed from
 * the listener socket's pending list and enqueued in the listener socket's
 * accept queue. Callers of accept(2) will accept connected sockets from the
 * listener socket's accept queue. If the socket cannot be accepted for some
 * reason then it is marked rejected. Once the connection is accepted, it is
 * owned by the user process and the responsibility for cleanup falls with
 * that user process.
 *
 * - It is possible that these pending sockets will never reach the connected
 * state; in fact, we may never receive another packet after the connection
 * request. Because of this, we must schedule a cleanup function to run in
 * the future, after some amount of time in which a connection should have
 * been established. This function ensures that the socket is off all lists
 * so it cannot be retrieved, then drops all references to the socket so it
 * is cleaned up (sock_put() -> sk_free() -> our sk_destruct implementation).
 * Note this function will also clean up rejected sockets, those that reach
 * the connected state but leave it before they have been accepted.
 *
 * - Lock ordering for pending or accept queue sockets is:
 *
 *     lock_sock(listener);
 *     lock_sock_nested(pending, SINGLE_DEPTH_NESTING);
 *
 * Using explicit nested locking keeps lockdep happy since normally only one
 * lock of a given class may be taken at a time.
 *
 * - Sockets created by user action will be cleaned up when the user process
 * calls close(2), causing our release implementation to be called. Our
 * release implementation will perform some cleanup then drop the last
 * reference so our sk_destruct implementation is invoked. Our sk_destruct
 * implementation will perform additional cleanup that's common for both
 * types of sockets.
 *
 * - A socket's reference count is what ensures that the structure won't be
 * freed. Each entry in a list (such as the "global" bound and connected
 * tables and the listener socket's pending list and accept queue) ensures
 * a reference. When we defer work until process context and pass a socket
 * as our argument, we must ensure the reference count is increased to
 * ensure the socket isn't freed before the function is run; the deferred
 * function will then drop the reference.
 *
 * - sk->sk_state uses the TCP state constants because they are widely used
 * by other address families and exposed to userspace tools like ss(8):
 *
 *   TCP_CLOSE - unconnected
 *   TCP_SYN_SENT - connecting
 *   TCP_ESTABLISHED - connected
 *   TCP_CLOSING - disconnecting
 *   TCP_LISTEN - listening
 */

#include <linux/types.h>
#include <linux/bitops.h>
#include <linux/cred.h>
#include <linux/init.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/sched/signal.h>
#include <linux/kmod.h>
#include <linux/list.h>
#include <linux/miscdevice.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/net.h>
#include <linux/poll.h>
#include <linux/random.h>
#include <linux/skbuff.h>
#include <linux/smp.h>
#include <linux/socket.h>
#include <linux/stddef.h>
#include <linux/unistd.h>
#include <linux/wait.h>
#include <linux/workqueue.h>
#include <net/sock.h>
#include <net/af_vsock.h>

static int __vsock_bind(struct sock *sk, struct sockaddr_vm *addr);
static void vsock_sk_destruct(struct sock *sk);
static int vsock_queue_rcv_skb(struct sock *sk, struct sk_buff *skb);

/* Protocol family. */
static struct proto vsock_proto = {
	.name = "AF_VSOCK",
	.owner = THIS_MODULE,
	.obj_size = sizeof(struct vsock_sock),
};

/* The default peer timeout indicates how long we will wait for a peer response
 * to a control message.
 */
#define VSOCK_DEFAULT_CONNECT_TIMEOUT (2 * HZ)

#define VSOCK_DEFAULT_BUFFER_SIZE     (1024 * 256)
#define VSOCK_DEFAULT_BUFFER_MAX_SIZE (1024 * 256)
#define VSOCK_DEFAULT_BUFFER_MIN_SIZE 128

/* Transport used for host->guest communication */
static const struct vsock_transport *transport_h2g;
/* Transport used for guest->host communication */
static const struct vsock_transport *transport_g2h;
/* Transport used for DGRAM communication */
static const struct vsock_transport *transport_dgram;
/* Transport used for local communication */
static const struct vsock_transport *transport_local;
static DEFINE_MUTEX(vsock_register_mutex);

/**** UTILS ****/

/* Each bound VSocket is stored in the bind hash table and each connected
 * VSocket is stored in the connected hash table.
 *
 * Unbound sockets are all put on the same list attached to the end of the
 * hash table (vsock_unbound_sockets). Bound sockets are added to the hash
 * table in the bucket that their local address hashes to
 * (vsock_bound_sockets(addr) represents the list that addr hashes to).
 *
 * Specifically, we initialize the vsock_bind_table array to a size of
 * VSOCK_HASH_SIZE + 1 so that vsock_bind_table[0] through
 * vsock_bind_table[VSOCK_HASH_SIZE - 1] are for bound sockets and
 * vsock_bind_table[VSOCK_HASH_SIZE] is for unbound sockets. The hash function
 * mods with VSOCK_HASH_SIZE to ensure this.
 */
#define MAX_PORT_RETRIES 24

#define VSOCK_HASH(addr) ((addr)->svm_port % VSOCK_HASH_SIZE)
#define vsock_bound_sockets(addr) (&vsock_bind_table[VSOCK_HASH(addr)])
#define vsock_unbound_sockets (&vsock_bind_table[VSOCK_HASH_SIZE])

/* XXX This can probably be implemented in a better way. */
#define VSOCK_CONN_HASH(src, dst) \
	(((src)->svm_cid ^ (dst)->svm_port) % VSOCK_HASH_SIZE)
#define vsock_connected_sockets(src, dst) \
	(&vsock_connected_table[VSOCK_CONN_HASH(src, dst)])
#define vsock_connected_sockets_vsk(vsk) \
	vsock_connected_sockets(&(vsk)->remote_addr, &(vsk)->local_addr)

struct list_head vsock_bind_table[VSOCK_HASH_SIZE + 1];
EXPORT_SYMBOL_GPL(vsock_bind_table);
struct list_head vsock_connected_table[VSOCK_HASH_SIZE];
EXPORT_SYMBOL_GPL(vsock_connected_table);
DEFINE_SPINLOCK(vsock_table_lock);
EXPORT_SYMBOL_GPL(vsock_table_lock);

/* Autobind this socket to the local address if necessary. */
static int vsock_auto_bind(struct vsock_sock *vsk)
{
	struct sock *sk = sk_vsock(vsk);
	struct sockaddr_vm local_addr;

	if (vsock_addr_bound(&vsk->local_addr))
		return 0;
	vsock_addr_init(&local_addr, VMADDR_CID_ANY, VMADDR_PORT_ANY);
	return __vsock_bind(sk, &local_addr);
}

static void vsock_init_tables(void)
{
	int i;

	for (i = 0; i < ARRAY_SIZE(vsock_bind_table); i++)
		INIT_LIST_HEAD(&vsock_bind_table[i]);

	for (i = 0; i < ARRAY_SIZE(vsock_connected_table); i++)
		INIT_LIST_HEAD(&vsock_connected_table[i]);
}

static void __vsock_insert_bound(struct list_head *list,
				 struct vsock_sock *vsk)
{
	sock_hold(&vsk->sk);
	list_add(&vsk->bound_table, list);
}

static void __vsock_insert_connected(struct list_head *list,
				     struct vsock_sock *vsk)
{
	sock_hold(&vsk->sk);
	list_add(&vsk->connected_table, list);
}

static void __vsock_remove_bound(struct vsock_sock *vsk)
{
	list_del_init(&vsk->bound_table);
	sock_put(&vsk->sk);
}

static void __vsock_remove_connected(struct vsock_sock *vsk)
{
	list_del_init(&vsk->connected_table);
	sock_put(&vsk->sk);
}

static struct sock *__vsock_find_bound_socket(struct sockaddr_vm *addr)
{
	struct vsock_sock *vsk;

	list_for_each_entry(vsk, vsock_bound_sockets(addr), bound_table) {
		if (vsock_addr_equals_addr(addr, &vsk->local_addr))
			return sk_vsock(vsk);

		if (addr->svm_port == vsk->local_addr.svm_port &&
		    (vsk->local_addr.svm_cid == VMADDR_CID_ANY ||
		     addr->svm_cid == VMADDR_CID_ANY))
			return sk_vsock(vsk);
	}

	return NULL;
}

static struct sock *__vsock_find_connected_socket(struct sockaddr_vm *src,
						  struct sockaddr_vm *dst)
{
	struct vsock_sock *vsk;

	list_for_each_entry(vsk, vsock_connected_sockets(src, dst),
			    connected_table) {
		if (vsock_addr_equals_addr(src, &vsk->remote_addr) &&
		    dst->svm_port == vsk->local_addr.svm_port) {
			return sk_vsock(vsk);
		}
	}

	return NULL;
}

^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 262) static void vsock_insert_unbound(struct vsock_sock *vsk)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 263) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 264) spin_lock_bh(&vsock_table_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 265) __vsock_insert_bound(vsock_unbound_sockets, vsk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 266) spin_unlock_bh(&vsock_table_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 267) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 268)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 269) void vsock_insert_connected(struct vsock_sock *vsk)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 270) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 271) struct list_head *list = vsock_connected_sockets(
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 272) &vsk->remote_addr, &vsk->local_addr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 273)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 274) spin_lock_bh(&vsock_table_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 275) __vsock_insert_connected(list, vsk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 276) spin_unlock_bh(&vsock_table_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 277) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 278) EXPORT_SYMBOL_GPL(vsock_insert_connected);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 279)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 280) void vsock_remove_bound(struct vsock_sock *vsk)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 281) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 282) spin_lock_bh(&vsock_table_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 283) if (__vsock_in_bound_table(vsk))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 284) __vsock_remove_bound(vsk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 285) spin_unlock_bh(&vsock_table_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 286) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 287) EXPORT_SYMBOL_GPL(vsock_remove_bound);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 288)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 289) void vsock_remove_connected(struct vsock_sock *vsk)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 290) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 291) spin_lock_bh(&vsock_table_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 292) if (__vsock_in_connected_table(vsk))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 293) __vsock_remove_connected(vsk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 294) spin_unlock_bh(&vsock_table_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 295) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 296) EXPORT_SYMBOL_GPL(vsock_remove_connected);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 297)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 298) struct sock *vsock_find_bound_socket(struct sockaddr_vm *addr)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 299) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 300) struct sock *sk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 301)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 302) spin_lock_bh(&vsock_table_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 303) sk = __vsock_find_bound_socket(addr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 304) if (sk)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 305) sock_hold(sk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 306)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 307) spin_unlock_bh(&vsock_table_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 308)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 309) return sk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 310) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 311) EXPORT_SYMBOL_GPL(vsock_find_bound_socket);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 312)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 313) struct sock *vsock_find_connected_socket(struct sockaddr_vm *src,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 314) struct sockaddr_vm *dst)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 315) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 316) struct sock *sk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 317)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 318) spin_lock_bh(&vsock_table_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 319) sk = __vsock_find_connected_socket(src, dst);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 320) if (sk)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 321) sock_hold(sk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 322)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 323) spin_unlock_bh(&vsock_table_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 324)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 325) return sk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 326) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 327) EXPORT_SYMBOL_GPL(vsock_find_connected_socket);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 328)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 329) void vsock_remove_sock(struct vsock_sock *vsk)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 330) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 331) vsock_remove_bound(vsk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 332) vsock_remove_connected(vsk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 333) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 334) EXPORT_SYMBOL_GPL(vsock_remove_sock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 335)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 336) void vsock_for_each_connected_socket(void (*fn)(struct sock *sk))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 337) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 338) int i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 339)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 340) spin_lock_bh(&vsock_table_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 341)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 342) for (i = 0; i < ARRAY_SIZE(vsock_connected_table); i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 343) struct vsock_sock *vsk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 344) list_for_each_entry(vsk, &vsock_connected_table[i],
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 345) connected_table)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 346) fn(sk_vsock(vsk));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 347) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 348)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 349) spin_unlock_bh(&vsock_table_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 350) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 351) EXPORT_SYMBOL_GPL(vsock_for_each_connected_socket);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 352)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 353) void vsock_add_pending(struct sock *listener, struct sock *pending)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 354) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 355) struct vsock_sock *vlistener;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 356) struct vsock_sock *vpending;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 357)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 358) vlistener = vsock_sk(listener);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 359) vpending = vsock_sk(pending);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 360)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 361) sock_hold(pending);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 362) sock_hold(listener);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 363) list_add_tail(&vpending->pending_links, &vlistener->pending_links);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 364) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 365) EXPORT_SYMBOL_GPL(vsock_add_pending);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 366)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 367) void vsock_remove_pending(struct sock *listener, struct sock *pending)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 368) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 369) struct vsock_sock *vpending = vsock_sk(pending);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 370)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 371) list_del_init(&vpending->pending_links);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 372) sock_put(listener);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 373) sock_put(pending);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 374) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 375) EXPORT_SYMBOL_GPL(vsock_remove_pending);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 376)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 377) void vsock_enqueue_accept(struct sock *listener, struct sock *connected)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 378) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 379) struct vsock_sock *vlistener;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 380) struct vsock_sock *vconnected;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 381)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 382) vlistener = vsock_sk(listener);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 383) vconnected = vsock_sk(connected);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 384)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 385) sock_hold(connected);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 386) sock_hold(listener);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 387) list_add_tail(&vconnected->accept_queue, &vlistener->accept_queue);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 388) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 389) EXPORT_SYMBOL_GPL(vsock_enqueue_accept);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 390)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 391) static bool vsock_use_local_transport(unsigned int remote_cid)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 392) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 393) if (!transport_local)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 394) return false;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 395)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 396) if (remote_cid == VMADDR_CID_LOCAL)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 397) return true;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 398)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 399) if (transport_g2h) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 400) return remote_cid == transport_g2h->get_local_cid();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 401) } else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 402) return remote_cid == VMADDR_CID_HOST;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 403) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 404) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 405)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 406) static void vsock_deassign_transport(struct vsock_sock *vsk)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 407) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 408) if (!vsk->transport)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 409) return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 410)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 411) vsk->transport->destruct(vsk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 412) module_put(vsk->transport->module);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 413) vsk->transport = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 414) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 415)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 416) /* Assign a transport to a socket and call the .init transport callback.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 417) *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 418) * Note: for stream sockets this must be called when vsk->remote_addr is set
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 419) * (e.g. during connect() or when a connection request is received on a
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 420) * listener socket).
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 421) * The vsk->remote_addr is used to decide which transport to use:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 422) * - remote CID == VMADDR_CID_LOCAL, or == the g2h transport's local CID, or
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 423) *   == VMADDR_CID_HOST when no g2h transport is loaded: use the local transport;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 424) * - remote CID <= VMADDR_CID_HOST: use the guest->host transport;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 425) * - remote CID > VMADDR_CID_HOST: use the host->guest transport;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 426) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 427) int vsock_assign_transport(struct vsock_sock *vsk, struct vsock_sock *psk)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 428) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 429) const struct vsock_transport *new_transport;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 430) struct sock *sk = sk_vsock(vsk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 431) unsigned int remote_cid = vsk->remote_addr.svm_cid;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 432) int ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 433)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 434) switch (sk->sk_type) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 435) case SOCK_DGRAM:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 436) new_transport = transport_dgram;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 437) break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 438) case SOCK_STREAM:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 439) if (vsock_use_local_transport(remote_cid))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 440) new_transport = transport_local;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 441) else if (remote_cid <= VMADDR_CID_HOST || !transport_h2g)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 442) new_transport = transport_g2h;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 443) else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 444) new_transport = transport_h2g;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 445) break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 446) default:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 447) return -ESOCKTNOSUPPORT;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 448) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 449)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 450) if (vsk->transport) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 451) if (vsk->transport == new_transport)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 452) return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 453)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 454) /* transport->release() must be called with sock lock acquired.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 455) * This path can only be taken during vsock_stream_connect(),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 456) * where we already hold the sock lock.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 457) * In the other cases, this function is called on a new socket
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 458) * which is not assigned to any transport.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 459) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 460) vsk->transport->release(vsk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 461) vsock_deassign_transport(vsk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 462) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 463)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 464) /* We increase the module refcnt to prevent the transport module from being
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 465) * unloaded while there are open sockets assigned to it.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 466) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 467) if (!new_transport || !try_module_get(new_transport->module))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 468) return -ENODEV;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 469)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 470) ret = new_transport->init(vsk, psk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 471) if (ret) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 472) module_put(new_transport->module);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 473) return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 474) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 475)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 476) vsk->transport = new_transport;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 477)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 478) return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 479) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 480) EXPORT_SYMBOL_GPL(vsock_assign_transport);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 481)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 482) bool vsock_find_cid(unsigned int cid)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 483) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 484) if (transport_g2h && cid == transport_g2h->get_local_cid())
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 485) return true;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 486)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 487) if (transport_h2g && cid == VMADDR_CID_HOST)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 488) return true;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 489)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 490) if (transport_local && cid == VMADDR_CID_LOCAL)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 491) return true;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 492)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 493) return false;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 494) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 495) EXPORT_SYMBOL_GPL(vsock_find_cid);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 496)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 497) static struct sock *vsock_dequeue_accept(struct sock *listener)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 498) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 499) struct vsock_sock *vlistener;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 500) struct vsock_sock *vconnected;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 501)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 502) vlistener = vsock_sk(listener);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 503)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 504) if (list_empty(&vlistener->accept_queue))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 505) return NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 506)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 507) vconnected = list_entry(vlistener->accept_queue.next,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 508) struct vsock_sock, accept_queue);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 509)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 510) list_del_init(&vconnected->accept_queue);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 511) sock_put(listener);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 512) /* The caller will need a reference on the connected socket so we let
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 513) * it call sock_put().
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 514) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 515)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 516) return sk_vsock(vconnected);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 517) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 518)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 519) static bool vsock_is_accept_queue_empty(struct sock *sk)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 520) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 521) struct vsock_sock *vsk = vsock_sk(sk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 522) return list_empty(&vsk->accept_queue);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 523) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 524)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 525) static bool vsock_is_pending(struct sock *sk)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 526) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 527) struct vsock_sock *vsk = vsock_sk(sk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 528) return !list_empty(&vsk->pending_links);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 529) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 530)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 531) static int vsock_send_shutdown(struct sock *sk, int mode)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 532) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 533) struct vsock_sock *vsk = vsock_sk(sk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 534)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 535) if (!vsk->transport)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 536) return -ENODEV;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 537)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 538) return vsk->transport->shutdown(vsk, mode);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 539) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 540)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 541) static void vsock_pending_work(struct work_struct *work)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 542) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 543) struct sock *sk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 544) struct sock *listener;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 545) struct vsock_sock *vsk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 546) bool cleanup;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 547)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 548) vsk = container_of(work, struct vsock_sock, pending_work.work);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 549) sk = sk_vsock(vsk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 550) listener = vsk->listener;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 551) cleanup = true;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 552)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 553) lock_sock(listener);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 554) lock_sock_nested(sk, SINGLE_DEPTH_NESTING);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 555)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 556) if (vsock_is_pending(sk)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 557) vsock_remove_pending(listener, sk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 558)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 559) sk_acceptq_removed(listener);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 560) } else if (!vsk->rejected) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 561) /* We are not on the pending list and accept() did not reject
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 562) * us, so we must have been accepted by our user process. We
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 563) * just need to drop our references to the sockets and be on
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 564) * our way.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 565) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 566) cleanup = false;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 567) goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 568) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 569)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 570) /* We need to remove ourselves from the global connected sockets list so
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 571) * incoming packets can't find this socket, and to reduce the reference
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 572) * count.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 573) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 574) vsock_remove_connected(vsk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 575)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 576) sk->sk_state = TCP_CLOSE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 577)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 578) out:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 579) release_sock(sk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 580) release_sock(listener);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 581) if (cleanup)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 582) sock_put(sk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 583)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 584) sock_put(sk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 585) sock_put(listener);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 586) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 587)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 588) /**** SOCKET OPERATIONS ****/
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 589)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 590) static int __vsock_bind_stream(struct vsock_sock *vsk,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 591) struct sockaddr_vm *addr)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 592) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 593) static u32 port;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 594) struct sockaddr_vm new_addr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 595)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 596) if (!port)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 597) port = LAST_RESERVED_PORT + 1 +
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 598) prandom_u32_max(U32_MAX - LAST_RESERVED_PORT);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 599)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 600) vsock_addr_init(&new_addr, addr->svm_cid, addr->svm_port);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 601)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 602) if (addr->svm_port == VMADDR_PORT_ANY) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 603) bool found = false;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 604) unsigned int i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 605)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 606) for (i = 0; i < MAX_PORT_RETRIES; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 607) if (port <= LAST_RESERVED_PORT)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 608) port = LAST_RESERVED_PORT + 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 609)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 610) new_addr.svm_port = port++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 611)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 612) if (!__vsock_find_bound_socket(&new_addr)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 613) found = true;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 614) break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 615) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 616) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 617)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 618) if (!found)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 619) return -EADDRNOTAVAIL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 620) } else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 621) /* If port is in reserved range, ensure caller
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 622) * has necessary privileges.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 623) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 624) if (addr->svm_port <= LAST_RESERVED_PORT &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 625) !capable(CAP_NET_BIND_SERVICE)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 626) return -EACCES;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 627) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 628)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 629) if (__vsock_find_bound_socket(&new_addr))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 630) return -EADDRINUSE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 631) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 632)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 633) vsock_addr_init(&vsk->local_addr, new_addr.svm_cid, new_addr.svm_port);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 634)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 635) /* Remove stream sockets from the unbound list and add them to the hash
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 636) * table for easy lookup by their address. The unbound list is simply an
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 637) * extra entry at the end of the hash table, a trick used by AF_UNIX.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 638) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 639) __vsock_remove_bound(vsk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 640) __vsock_insert_bound(vsock_bound_sockets(&vsk->local_addr), vsk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 641)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 642) return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 643) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 644)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 645) static int __vsock_bind_dgram(struct vsock_sock *vsk,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 646) struct sockaddr_vm *addr)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 647) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 648) return vsk->transport->dgram_bind(vsk, addr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 649) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 650)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 651) static int __vsock_bind(struct sock *sk, struct sockaddr_vm *addr)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 652) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 653) struct vsock_sock *vsk = vsock_sk(sk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 654) int retval;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 655)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 656) /* First ensure this socket isn't already bound. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 657) if (vsock_addr_bound(&vsk->local_addr))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 658) return -EINVAL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 659)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 660) /* Now bind to the provided address or select appropriate values if
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 661) * none are provided (VMADDR_CID_ANY and VMADDR_PORT_ANY). Note that,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 662) * just as AF_INET prevents binding to a non-local IP address (in most
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 663) * cases), we only allow binding to a local CID.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 664) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 665) if (addr->svm_cid != VMADDR_CID_ANY && !vsock_find_cid(addr->svm_cid))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 666) return -EADDRNOTAVAIL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 667)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 668) switch (sk->sk_socket->type) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 669) case SOCK_STREAM:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 670) spin_lock_bh(&vsock_table_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 671) retval = __vsock_bind_stream(vsk, addr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 672) spin_unlock_bh(&vsock_table_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 673) break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 674)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 675) case SOCK_DGRAM:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 676) retval = __vsock_bind_dgram(vsk, addr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 677) break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 678)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 679) default:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 680) retval = -EINVAL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 681) break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 682) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 683)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 684) return retval;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 685) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 686)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 687) static void vsock_connect_timeout(struct work_struct *work);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 688)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 689) static struct sock *__vsock_create(struct net *net,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 690) struct socket *sock,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 691) struct sock *parent,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 692) gfp_t priority,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 693) unsigned short type,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 694) int kern)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 695) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 696) struct sock *sk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 697) struct vsock_sock *psk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 698) struct vsock_sock *vsk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 699)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 700) sk = sk_alloc(net, AF_VSOCK, priority, &vsock_proto, kern);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 701) if (!sk)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 702) return NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 703)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 704) sock_init_data(sock, sk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 705)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 706) /* sk->sk_type is normally set in sock_init_data, but only if sock is
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 707) * non-NULL. We make sure that our sockets always have a type by
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 708) * setting it here if needed.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 709) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 710) if (!sock)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 711) sk->sk_type = type;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 712)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 713) vsk = vsock_sk(sk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 714) vsock_addr_init(&vsk->local_addr, VMADDR_CID_ANY, VMADDR_PORT_ANY);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 715) vsock_addr_init(&vsk->remote_addr, VMADDR_CID_ANY, VMADDR_PORT_ANY);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 716)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 717) sk->sk_destruct = vsock_sk_destruct;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 718) sk->sk_backlog_rcv = vsock_queue_rcv_skb;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 719) sock_reset_flag(sk, SOCK_DONE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 720)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 721) INIT_LIST_HEAD(&vsk->bound_table);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 722) INIT_LIST_HEAD(&vsk->connected_table);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 723) vsk->listener = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 724) INIT_LIST_HEAD(&vsk->pending_links);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 725) INIT_LIST_HEAD(&vsk->accept_queue);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 726) vsk->rejected = false;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 727) vsk->sent_request = false;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 728) vsk->ignore_connecting_rst = false;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 729) vsk->peer_shutdown = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 730) INIT_DELAYED_WORK(&vsk->connect_work, vsock_connect_timeout);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 731) INIT_DELAYED_WORK(&vsk->pending_work, vsock_pending_work);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 732)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 733) psk = parent ? vsock_sk(parent) : NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 734) if (parent) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 735) vsk->trusted = psk->trusted;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 736) vsk->owner = get_cred(psk->owner);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 737) vsk->connect_timeout = psk->connect_timeout;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 738) vsk->buffer_size = psk->buffer_size;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 739) vsk->buffer_min_size = psk->buffer_min_size;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 740) vsk->buffer_max_size = psk->buffer_max_size;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 741) security_sk_clone(parent, sk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 742) } else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 743) vsk->trusted = ns_capable_noaudit(&init_user_ns, CAP_NET_ADMIN);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 744) vsk->owner = get_current_cred();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 745) vsk->connect_timeout = VSOCK_DEFAULT_CONNECT_TIMEOUT;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 746) vsk->buffer_size = VSOCK_DEFAULT_BUFFER_SIZE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 747) vsk->buffer_min_size = VSOCK_DEFAULT_BUFFER_MIN_SIZE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 748) vsk->buffer_max_size = VSOCK_DEFAULT_BUFFER_MAX_SIZE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 749) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 750)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 751) return sk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 752) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 753)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 754) static void __vsock_release(struct sock *sk, int level)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 755) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 756) if (sk) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 757) struct sock *pending;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 758) struct vsock_sock *vsk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 759)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 760) vsk = vsock_sk(sk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 761) pending = NULL; /* Silence a maybe-uninitialized compiler warning. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 762)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 763) /* When "level" is SINGLE_DEPTH_NESTING, use the nested
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 764) * version to avoid the warning "possible recursive locking
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 765) * detected". When "level" is 0, lock_sock_nested(sk, level)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 766) * is the same as lock_sock(sk).
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 767) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 768) lock_sock_nested(sk, level);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 769)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 770) if (vsk->transport)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 771) vsk->transport->release(vsk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 772) else if (sk->sk_type == SOCK_STREAM)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 773) vsock_remove_sock(vsk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 774)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 775) sock_orphan(sk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 776) sk->sk_shutdown = SHUTDOWN_MASK;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 777)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 778) skb_queue_purge(&sk->sk_receive_queue);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 779)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 780) /* Clean up any sockets that never were accepted. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 781) while ((pending = vsock_dequeue_accept(sk)) != NULL) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 782) __vsock_release(pending, SINGLE_DEPTH_NESTING);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 783) sock_put(pending);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 784) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 785)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 786) release_sock(sk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 787) sock_put(sk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 788) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 789) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 790)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 791) static void vsock_sk_destruct(struct sock *sk)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 792) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 793) struct vsock_sock *vsk = vsock_sk(sk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 794)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 795) vsock_deassign_transport(vsk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 796)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 797) /* When clearing these addresses, there's no need to set the family and
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 798) * possibly register the address family with the kernel.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 799) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 800) vsock_addr_init(&vsk->local_addr, VMADDR_CID_ANY, VMADDR_PORT_ANY);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 801) vsock_addr_init(&vsk->remote_addr, VMADDR_CID_ANY, VMADDR_PORT_ANY);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 802)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 803) put_cred(vsk->owner);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 804) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 805)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 806) static int vsock_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 807) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 808) int err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 809)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 810) err = sock_queue_rcv_skb(sk, skb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 811) if (err)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 812) kfree_skb(skb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 813)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 814) return err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 815) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 816)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 817) struct sock *vsock_create_connected(struct sock *parent)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 818) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 819) return __vsock_create(sock_net(parent), NULL, parent, GFP_KERNEL,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 820) parent->sk_type, 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 821) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 822) EXPORT_SYMBOL_GPL(vsock_create_connected);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 823)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 824) s64 vsock_stream_has_data(struct vsock_sock *vsk)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 825) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 826) return vsk->transport->stream_has_data(vsk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 827) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 828) EXPORT_SYMBOL_GPL(vsock_stream_has_data);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 829)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 830) s64 vsock_stream_has_space(struct vsock_sock *vsk)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 831) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 832) return vsk->transport->stream_has_space(vsk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 833) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 834) EXPORT_SYMBOL_GPL(vsock_stream_has_space);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 835)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 836) static int vsock_release(struct socket *sock)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 837) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 838) __vsock_release(sock->sk, 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 839) sock->sk = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 840) sock->state = SS_FREE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 841)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 842) return 0;
}

static int
vsock_bind(struct socket *sock, struct sockaddr *addr, int addr_len)
{
	int err;
	struct sock *sk;
	struct sockaddr_vm *vm_addr;

	sk = sock->sk;

	if (vsock_addr_cast(addr, addr_len, &vm_addr) != 0)
		return -EINVAL;

	lock_sock(sk);
	err = __vsock_bind(sk, vm_addr);
	release_sock(sk);

	return err;
}

static int vsock_getname(struct socket *sock,
			 struct sockaddr *addr, int peer)
{
	int err;
	struct sock *sk;
	struct vsock_sock *vsk;
	struct sockaddr_vm *vm_addr;

	sk = sock->sk;
	vsk = vsock_sk(sk);
	err = 0;

	lock_sock(sk);

	if (peer) {
		if (sock->state != SS_CONNECTED) {
			err = -ENOTCONN;
			goto out;
		}
		vm_addr = &vsk->remote_addr;
	} else {
		vm_addr = &vsk->local_addr;
	}

	if (!vm_addr) {
		err = -EINVAL;
		goto out;
	}

	/* sys_getsockname() and sys_getpeername() pass us a
	 * MAX_SOCK_ADDR-sized buffer and don't set addr_len. Unfortunately
	 * that macro is defined in socket.c rather than a shared header, so
	 * we hardcode its value here.
	 */
	BUILD_BUG_ON(sizeof(*vm_addr) > 128);
	memcpy(addr, vm_addr, sizeof(*vm_addr));
	err = sizeof(*vm_addr);

out:
	release_sock(sk);
	return err;
}
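The BUILD_BUG_ON() above turns a size violation into a compile-time failure: the build breaks if struct sockaddr_vm ever outgrows the 128-byte buffer. A minimal userspace sketch of the same negative-array-size idiom (`STATIC_CHECK` and `toy_addr` are made-up names, not the kernel's):

```c
#include <assert.h>

/* Compile-time check: a negative array size is a compile error, so this
 * expression only compiles when cond is false. Sketch of the idiom behind
 * the kernel's BUILD_BUG_ON(); STATIC_CHECK is an illustrative name.
 */
#define STATIC_CHECK(cond) ((void)sizeof(char[1 - 2 * !!(cond)]))

struct toy_addr {		/* stand-in for sockaddr_vm */
	unsigned short family;
	unsigned int port;
	unsigned int cid;
};

int toy_addr_fits(void)
{
	STATIC_CHECK(sizeof(struct toy_addr) > 128);	/* compiles: struct is small */
	return sizeof(struct toy_addr) <= 128;
}
```

If `toy_addr` grew past 128 bytes, the file would simply stop compiling, which is exactly the guarantee the memcpy() above relies on.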

static int vsock_shutdown(struct socket *sock, int mode)
{
	int err;
	struct sock *sk;

	/* User level uses SHUT_RD (0) and SHUT_WR (1), but the kernel uses
	 * RCV_SHUTDOWN (1) and SEND_SHUTDOWN (2), so we must increment mode
	 * here like the other address families do. Note also that the
	 * increment makes SHUT_RDWR (2) into RCV_SHUTDOWN | SEND_SHUTDOWN (3),
	 * which is what we want.
	 */
	mode++;
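The increment maps the shutdown(2) constants onto the kernel's flag bits, and only works because the userspace values happen to be 0, 1, 2. A sketch of that mapping (the enum names and `shut_to_flags` are illustrative; the numeric values mirror the real constants):

```c
#include <assert.h>

enum { SHUT_RD_U = 0, SHUT_WR_U = 1, SHUT_RDWR_U = 2 };	/* shutdown(2) values */
enum { RCV_SHUT = 1, SEND_SHUT = 2 };			/* kernel flag bits  */

/* Same "mode++" as in vsock_shutdown() above. */
static int shut_to_flags(int how)
{
	return how + 1;
}
```

Note how SHUT_RDWR maps to 3, which is exactly the OR of both flag bits, so the later `mode & (RCV_SHUTDOWN | SEND_SHUTDOWN)` test handles all three cases uniformly.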

	if ((mode & ~SHUTDOWN_MASK) || !mode)
		return -EINVAL;

	/* If this is a STREAM socket and it is not connected then bail out
	 * immediately. If it is a DGRAM socket then we must first kick the
	 * socket so that it wakes up from any sleeping calls, for example
	 * recv(), and then afterwards return the error.
	 */

	sk = sock->sk;

	lock_sock(sk);
	if (sock->state == SS_UNCONNECTED) {
		err = -ENOTCONN;
		if (sk->sk_type == SOCK_STREAM)
			goto out;
	} else {
		sock->state = SS_DISCONNECTING;
		err = 0;
	}

	/* Receive and send shutdowns are treated alike. */
	mode = mode & (RCV_SHUTDOWN | SEND_SHUTDOWN);
	if (mode) {
		sk->sk_shutdown |= mode;
		sk->sk_state_change(sk);

		if (sk->sk_type == SOCK_STREAM) {
			sock_reset_flag(sk, SOCK_DONE);
			vsock_send_shutdown(sk, mode);
		}
	}

out:
	release_sock(sk);
	return err;
}
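The half-close semantics implemented here match the other address families: once the peer shuts down its send side, a reader drains any queued data and then sees EOF. A userspace sketch of that contract using an AF_UNIX socketpair (vsock needs a hypervisor transport, so AF_UNIX stands in; `read_after_peer_shut_wr` is a made-up helper):

```c
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Peer writes "bye", then half-closes with SHUT_WR. Our side reads the
 * queued payload, and the next read() returns 0 (EOF), not an error.
 * Returns that final read() result.
 */
int read_after_peer_shut_wr(void)
{
	int sv[2];
	char buf[8];
	ssize_t n;

	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
		return -1;
	(void)write(sv[0], "bye", 3);
	shutdown(sv[0], SHUT_WR);	/* peer's send side is now closed */

	n = read(sv[1], buf, sizeof(buf));	/* queued data still readable */
	if (n == 3)
		n = read(sv[1], buf, sizeof(buf));	/* now EOF: returns 0 */
	close(sv[0]);
	close(sv[1]);
	return (int)n;
}
```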

static __poll_t vsock_poll(struct file *file, struct socket *sock,
			   poll_table *wait)
{
	struct sock *sk;
	__poll_t mask;
	struct vsock_sock *vsk;

	sk = sock->sk;
	vsk = vsock_sk(sk);

	poll_wait(file, sk_sleep(sk), wait);
	mask = 0;

	if (sk->sk_err)
		/* Signify that there has been an error on this socket. */
		mask |= EPOLLERR;

	/* INET sockets treat local write shutdown and peer write shutdown as a
	 * case of EPOLLHUP set.
	 */
	if ((sk->sk_shutdown == SHUTDOWN_MASK) ||
	    ((sk->sk_shutdown & SEND_SHUTDOWN) &&
	     (vsk->peer_shutdown & SEND_SHUTDOWN))) {
		mask |= EPOLLHUP;
	}

	if (sk->sk_shutdown & RCV_SHUTDOWN ||
	    vsk->peer_shutdown & SEND_SHUTDOWN) {
		mask |= EPOLLRDHUP;
	}

	if (sock->type == SOCK_DGRAM) {
		/* For datagram sockets we can read if there is something in
		 * the queue and write as long as the socket isn't shutdown for
		 * sending.
		 */
		if (!skb_queue_empty_lockless(&sk->sk_receive_queue) ||
		    (sk->sk_shutdown & RCV_SHUTDOWN)) {
			mask |= EPOLLIN | EPOLLRDNORM;
		}

		if (!(sk->sk_shutdown & SEND_SHUTDOWN))
			mask |= EPOLLOUT | EPOLLWRNORM | EPOLLWRBAND;

	} else if (sock->type == SOCK_STREAM) {
		const struct vsock_transport *transport;

		lock_sock(sk);

		transport = vsk->transport;

		/* Listening sockets that have connections in their accept
		 * queue can be read.
		 */
		if (sk->sk_state == TCP_LISTEN &&
		    !vsock_is_accept_queue_empty(sk))
			mask |= EPOLLIN | EPOLLRDNORM;

		/* If there is something in the queue then we can read. */
		if (transport && transport->stream_is_active(vsk) &&
		    !(sk->sk_shutdown & RCV_SHUTDOWN)) {
			bool data_ready_now = false;
			int ret = transport->notify_poll_in(
					vsk, 1, &data_ready_now);
			if (ret < 0) {
				mask |= EPOLLERR;
			} else {
				if (data_ready_now)
					mask |= EPOLLIN | EPOLLRDNORM;
			}
		}

		/* Sockets whose connections have been closed, reset, or
		 * terminated should also be considered read, and we check the
		 * shutdown flag for that.
		 */
		if (sk->sk_shutdown & RCV_SHUTDOWN ||
		    vsk->peer_shutdown & SEND_SHUTDOWN) {
			mask |= EPOLLIN | EPOLLRDNORM;
		}

		/* Connected sockets that can produce data can be written. */
		if (transport && sk->sk_state == TCP_ESTABLISHED) {
			if (!(sk->sk_shutdown & SEND_SHUTDOWN)) {
				bool space_avail_now = false;
				int ret = transport->notify_poll_out(
						vsk, 1, &space_avail_now);
				if (ret < 0) {
					mask |= EPOLLERR;
				} else {
					if (space_avail_now)
						/* Remove EPOLLWRBAND since INET
						 * sockets are not setting it.
						 */
						mask |= EPOLLOUT | EPOLLWRNORM;
				}
			}
		}

		/* Simulate INET socket poll behaviors, which sets
		 * EPOLLOUT|EPOLLWRNORM when peer is closed and nothing to read,
		 * but local send is not shutdown.
		 */
		if (sk->sk_state == TCP_CLOSE || sk->sk_state == TCP_CLOSING) {
			if (!(sk->sk_shutdown & SEND_SHUTDOWN))
				mask |= EPOLLOUT | EPOLLWRNORM;
		}

		release_sock(sk);
	}

	return mask;
}
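The rule above that closed or reset connections still count as readable is standard socket behavior: poll() must wake the caller so it can read() and observe EOF, rather than block forever. A userspace sketch of that contract with an AF_UNIX pair (AF_UNIX stands in for vsock; `poll_sees_peer_close` is a made-up helper):

```c
#include <poll.h>
#include <sys/socket.h>
#include <unistd.h>

/* After the peer closes, poll() must report readability so the caller can
 * read() and observe EOF. Returns 1 if POLLIN was set and read() gave 0.
 */
int poll_sees_peer_close(void)
{
	int sv[2], ret;
	struct pollfd pfd;
	char c;

	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
		return -1;
	close(sv[0]);			/* peer goes away entirely */

	pfd.fd = sv[1];
	pfd.events = POLLIN;
	if (poll(&pfd, 1, 1000) != 1 || !(pfd.revents & POLLIN)) {
		close(sv[1]);
		return 0;
	}
	ret = (read(sv[1], &c, 1) == 0);	/* EOF, not an error */
	close(sv[1]);
	return ret;
}
```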

static int vsock_dgram_sendmsg(struct socket *sock, struct msghdr *msg,
			       size_t len)
{
	int err;
	struct sock *sk;
	struct vsock_sock *vsk;
	struct sockaddr_vm *remote_addr;
	const struct vsock_transport *transport;

	if (msg->msg_flags & MSG_OOB)
		return -EOPNOTSUPP;

	/* For now, MSG_DONTWAIT is always assumed... */
	err = 0;
	sk = sock->sk;
	vsk = vsock_sk(sk);

	lock_sock(sk);

	transport = vsk->transport;

	err = vsock_auto_bind(vsk);
	if (err)
		goto out;

	/* If the provided message contains an address, use that. Otherwise
	 * fall back on the socket's remote handle (if it has been connected).
	 */
	if (msg->msg_name &&
	    vsock_addr_cast(msg->msg_name, msg->msg_namelen,
			    &remote_addr) == 0) {
		/* Ensure this address is of the right type and is a valid
		 * destination.
		 */
		if (remote_addr->svm_cid == VMADDR_CID_ANY)
			remote_addr->svm_cid = transport->get_local_cid();

		if (!vsock_addr_bound(remote_addr)) {
			err = -EINVAL;
			goto out;
		}
	} else if (sock->state == SS_CONNECTED) {
		remote_addr = &vsk->remote_addr;

		if (remote_addr->svm_cid == VMADDR_CID_ANY)
			remote_addr->svm_cid = transport->get_local_cid();

		/* XXX Should connect() or this function ensure remote_addr is
		 * bound?
		 */
		if (!vsock_addr_bound(&vsk->remote_addr)) {
			err = -EINVAL;
			goto out;
		}
	} else {
		err = -EINVAL;
		goto out;
	}

	if (!transport->dgram_allow(remote_addr->svm_cid,
				    remote_addr->svm_port)) {
		err = -EINVAL;
		goto out;
	}

	err = transport->dgram_enqueue(vsk, remote_addr, msg, len);

out:
	release_sock(sk);
	return err;
}
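The destination-selection policy above (an explicit msg_name wins; otherwise fall back to the connected peer; otherwise fail) can be sketched as a small decision function. All names here are illustrative, not the kernel's:

```c
/* Mirrors the branch structure of vsock_dgram_sendmsg()'s destination
 * choice; the enum and function names are made up for illustration.
 */
enum dest { DEST_MSG_NAME, DEST_CONNECTED_PEER, DEST_NONE };

static enum dest pick_dest(int has_msg_name, int connected)
{
	if (has_msg_name)
		return DEST_MSG_NAME;	/* sendto()-style explicit address */
	if (connected)
		return DEST_CONNECTED_PEER;	/* send()-style, uses remote_addr */
	return DEST_NONE;		/* maps to -EINVAL above */
}
```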

static int vsock_dgram_connect(struct socket *sock,
			       struct sockaddr *addr, int addr_len, int flags)
{
	int err;
	struct sock *sk;
	struct vsock_sock *vsk;
	struct sockaddr_vm *remote_addr;

	sk = sock->sk;
	vsk = vsock_sk(sk);

	err = vsock_addr_cast(addr, addr_len, &remote_addr);
	if (err == -EAFNOSUPPORT && remote_addr->svm_family == AF_UNSPEC) {
		lock_sock(sk);
		vsock_addr_init(&vsk->remote_addr, VMADDR_CID_ANY,
				VMADDR_PORT_ANY);
		sock->state = SS_UNCONNECTED;
		release_sock(sk);
		return 0;
	} else if (err != 0) {
		return -EINVAL;
	}

	lock_sock(sk);

	err = vsock_auto_bind(vsk);
	if (err)
		goto out;

	if (!vsk->transport->dgram_allow(remote_addr->svm_cid,
					 remote_addr->svm_port)) {
		err = -EINVAL;
		goto out;
	}

	memcpy(&vsk->remote_addr, remote_addr, sizeof(vsk->remote_addr));
	sock->state = SS_CONNECTED;

out:
	release_sock(sk);
	return err;
}
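The AF_UNSPEC branch above dissolves a datagram association rather than failing, mirroring the long-standing UDP convention. A userspace sketch of that convention using AF_INET/UDP (UDP stands in for vsock here; `dgram_dissolve_works` is a made-up helper):

```c
#include <arpa/inet.h>
#include <errno.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* connect() a datagram socket, then dissolve the association with
 * AF_UNSPEC; returns 1 if getpeername() reports ENOTCONN afterwards,
 * i.e. the association is gone.
 */
int dgram_dissolve_works(void)
{
	struct sockaddr_in sa;
	socklen_t slen = sizeof(sa);
	int fd, ret;

	fd = socket(AF_INET, SOCK_DGRAM, 0);
	if (fd < 0)
		return -1;

	memset(&sa, 0, sizeof(sa));
	sa.sin_family = AF_INET;
	sa.sin_port = htons(4321);	/* no listener needed for UDP connect */
	sa.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
	if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
		close(fd);
		return -1;
	}

	memset(&sa, 0, sizeof(sa));
	sa.sin_family = AF_UNSPEC;	/* the "disconnect" request */
	if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
		close(fd);
		return -1;
	}

	ret = (getpeername(fd, (struct sockaddr *)&sa, &slen) < 0 &&
	       errno == ENOTCONN);
	close(fd);
	return ret;
}
```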

static int vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg,
			       size_t len, int flags)
{
	struct vsock_sock *vsk = vsock_sk(sock->sk);

	return vsk->transport->dgram_dequeue(vsk, msg, len, flags);
}

static const struct proto_ops vsock_dgram_ops = {
	.family = PF_VSOCK,
	.owner = THIS_MODULE,
	.release = vsock_release,
	.bind = vsock_bind,
	.connect = vsock_dgram_connect,
	.socketpair = sock_no_socketpair,
	.accept = sock_no_accept,
	.getname = vsock_getname,
	.poll = vsock_poll,
	.ioctl = sock_no_ioctl,
	.listen = sock_no_listen,
	.shutdown = vsock_shutdown,
	.sendmsg = vsock_dgram_sendmsg,
	.recvmsg = vsock_dgram_recvmsg,
	.mmap = sock_no_mmap,
	.sendpage = sock_no_sendpage,
};

static int vsock_transport_cancel_pkt(struct vsock_sock *vsk)
{
	const struct vsock_transport *transport = vsk->transport;

	if (!transport || !transport->cancel_pkt)
		return -EOPNOTSUPP;

	return transport->cancel_pkt(vsk);
}

static void vsock_connect_timeout(struct work_struct *work)
{
	struct sock *sk;
	struct vsock_sock *vsk;

	vsk = container_of(work, struct vsock_sock, connect_work.work);
	sk = sk_vsock(vsk);

	lock_sock(sk);
	if (sk->sk_state == TCP_SYN_SENT &&
	    (sk->sk_shutdown != SHUTDOWN_MASK)) {
		sk->sk_state = TCP_CLOSE;
		sk->sk_err = ETIMEDOUT;
		sk->sk_error_report(sk);
		vsock_transport_cancel_pkt(vsk);
	}
	release_sock(sk);

	sock_put(sk);
}
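Userspace pairs this timeout machinery with the usual non-blocking connect dance: connect() returns EINPROGRESS, the caller polls for writability, then reads SO_ERROR for the final result. A sketch of that pattern with TCP on loopback (TCP stands in for vsock; `nonblocking_connect_result` is a made-up helper):

```c
#include <arpa/inet.h>
#include <errno.h>
#include <fcntl.h>
#include <poll.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Non-blocking connect to our own loopback listener; returns the final
 * SO_ERROR value (0 on success).
 */
int nonblocking_connect_result(void)
{
	struct sockaddr_in sa;
	socklen_t slen = sizeof(sa);
	struct pollfd pfd;
	int lfd, cfd, soerr = -1;

	lfd = socket(AF_INET, SOCK_STREAM, 0);
	memset(&sa, 0, sizeof(sa));
	sa.sin_family = AF_INET;
	sa.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
	sa.sin_port = 0;			/* kernel picks a free port */
	bind(lfd, (struct sockaddr *)&sa, sizeof(sa));
	listen(lfd, 1);
	getsockname(lfd, (struct sockaddr *)&sa, &slen);

	cfd = socket(AF_INET, SOCK_STREAM, 0);
	fcntl(cfd, F_SETFL, O_NONBLOCK);
	/* May complete immediately on loopback; otherwise EINPROGRESS. */
	if (connect(cfd, (struct sockaddr *)&sa, sizeof(sa)) < 0 &&
	    errno != EINPROGRESS)
		goto out;

	pfd.fd = cfd;
	pfd.events = POLLOUT;			/* writable == connect done */
	if (poll(&pfd, 1, 5000) == 1) {
		slen = sizeof(soerr);
		getsockopt(cfd, SOL_SOCKET, SO_ERROR, &soerr, &slen);
	}
out:
	close(cfd);
	close(lfd);
	return soerr;
}
```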

static int vsock_stream_connect(struct socket *sock, struct sockaddr *addr,
				int addr_len, int flags)
{
	int err;
	struct sock *sk;
	struct vsock_sock *vsk;
	const struct vsock_transport *transport;
	struct sockaddr_vm *remote_addr;
	long timeout;
	DEFINE_WAIT(wait);

	err = 0;
	sk = sock->sk;
	vsk = vsock_sk(sk);

	lock_sock(sk);

	/* XXX AF_UNSPEC should make us disconnect like AF_INET. */
	switch (sock->state) {
	case SS_CONNECTED:
		err = -EISCONN;
		goto out;
	case SS_DISCONNECTING:
		err = -EINVAL;
		goto out;
	case SS_CONNECTING:
		/* This continues on so we can move sock into the SS_CONNECTED
		 * state once the connection has completed (at which point err
		 * will be set to zero also). Otherwise, we will either wait
		 * for the connection or return -EALREADY should this be a
		 * non-blocking call.
		 */
		err = -EALREADY;
		if (flags & O_NONBLOCK)
			goto out;
		break;
	default:
		if ((sk->sk_state == TCP_LISTEN) ||
		    vsock_addr_cast(addr, addr_len, &remote_addr) != 0) {
			err = -EINVAL;
			goto out;
		}

		/* Set the remote address that we are connecting to. */
		memcpy(&vsk->remote_addr, remote_addr,
		       sizeof(vsk->remote_addr));

		err = vsock_assign_transport(vsk, NULL);
		if (err)
			goto out;

		transport = vsk->transport;

		/* The hypervisor and well-known contexts do not have socket
		 * endpoints.
		 */
		if (!transport ||
		    !transport->stream_allow(remote_addr->svm_cid,
					     remote_addr->svm_port)) {
			err = -ENETUNREACH;
			goto out;
		}

		err = vsock_auto_bind(vsk);
		if (err)
			goto out;

		sk->sk_state = TCP_SYN_SENT;

		err = transport->connect(vsk);
		if (err < 0)
			goto out;

		/* Mark sock as connecting and set the error code to in
		 * progress in case this is a non-blocking connect.
		 */
		sock->state = SS_CONNECTING;
		err = -EINPROGRESS;
	}

	/* The receive path will handle all communication until we are able to
	 * enter the connected state. Here we wait for the connection to be
	 * completed or a notification of an error.
	 */
	timeout = vsk->connect_timeout;
	prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE);

	while (sk->sk_state != TCP_ESTABLISHED && sk->sk_err == 0) {
		if (flags & O_NONBLOCK) {
			/* If we're not going to block, we schedule a timeout
			 * function to generate a timeout on the connection
			 * attempt, in case the peer doesn't respond in a
			 * timely manner. We hold on to the socket until the
			 * timeout fires.
			 */
			sock_hold(sk);
			schedule_delayed_work(&vsk->connect_work, timeout);

			/* Skip ahead to preserve error code set above. */
			goto out_wait;
		}

		release_sock(sk);
		timeout = schedule_timeout(timeout);
		lock_sock(sk);

		if (signal_pending(current)) {
			err = sock_intr_errno(timeout);
			sk->sk_state = sk->sk_state == TCP_ESTABLISHED ? TCP_CLOSING : TCP_CLOSE;
			sock->state = SS_UNCONNECTED;
			vsock_transport_cancel_pkt(vsk);
			vsock_remove_connected(vsk);
			goto out_wait;
		} else if (timeout == 0) {
			err = -ETIMEDOUT;
			sk->sk_state = TCP_CLOSE;
			sock->state = SS_UNCONNECTED;
			vsock_transport_cancel_pkt(vsk);
			goto out_wait;
		}

		prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1371) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1372)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1373) if (sk->sk_err) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1374) err = -sk->sk_err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1375) sk->sk_state = TCP_CLOSE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1376) sock->state = SS_UNCONNECTED;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1377) } else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1378) err = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1379) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1380)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1381) out_wait:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1382) finish_wait(sk_sleep(sk), &wait);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1383) out:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1384) release_sock(sk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1385) return err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1386) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1387)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1388) static int vsock_accept(struct socket *sock, struct socket *newsock, int flags,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1389) bool kern)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1390) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1391) struct sock *listener;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1392) int err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1393) struct sock *connected;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1394) struct vsock_sock *vconnected;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1395) long timeout;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1396) DEFINE_WAIT(wait);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1397)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1398) err = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1399) listener = sock->sk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1400)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1401) lock_sock(listener);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1402)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1403) if (sock->type != SOCK_STREAM) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1404) err = -EOPNOTSUPP;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1405) goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1406) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1407)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1408) if (listener->sk_state != TCP_LISTEN) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1409) err = -EINVAL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1410) goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1411) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1412)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1413)	/* Wait for child sockets to appear; these are the new sockets
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1414) * created upon connection establishment.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1415) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1416) timeout = sock_rcvtimeo(listener, flags & O_NONBLOCK);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1417) prepare_to_wait(sk_sleep(listener), &wait, TASK_INTERRUPTIBLE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1418)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1419) while ((connected = vsock_dequeue_accept(listener)) == NULL &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1420) listener->sk_err == 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1421) release_sock(listener);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1422) timeout = schedule_timeout(timeout);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1423) finish_wait(sk_sleep(listener), &wait);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1424) lock_sock(listener);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1425)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1426) if (signal_pending(current)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1427) err = sock_intr_errno(timeout);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1428) goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1429) } else if (timeout == 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1430) err = -EAGAIN;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1431) goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1432) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1433)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1434) prepare_to_wait(sk_sleep(listener), &wait, TASK_INTERRUPTIBLE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1435) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1436) finish_wait(sk_sleep(listener), &wait);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1437)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1438) if (listener->sk_err)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1439) err = -listener->sk_err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1440)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1441) if (connected) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1442) sk_acceptq_removed(listener);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1443)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1444) lock_sock_nested(connected, SINGLE_DEPTH_NESTING);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1445) vconnected = vsock_sk(connected);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1446)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1447) /* If the listener socket has received an error, then we should
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1448) * reject this socket and return. Note that we simply mark the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1449) * socket rejected, drop our reference, and let the cleanup
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1450) * function handle the cleanup; the fact that we found it in
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1451) * the listener's accept queue guarantees that the cleanup
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1452) * function hasn't run yet.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1453) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1454) if (err) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1455) vconnected->rejected = true;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1456) } else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1457) newsock->state = SS_CONNECTED;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1458) sock_graft(connected, newsock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1459) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1460)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1461) release_sock(connected);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1462) sock_put(connected);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1463) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1464)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1465) out:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1466) release_sock(listener);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1467) return err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1468) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1469)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1470) static int vsock_listen(struct socket *sock, int backlog)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1471) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1472) int err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1473) struct sock *sk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1474) struct vsock_sock *vsk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1475)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1476) sk = sock->sk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1477)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1478) lock_sock(sk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1479)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1480) if (sock->type != SOCK_STREAM) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1481) err = -EOPNOTSUPP;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1482) goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1483) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1484)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1485) if (sock->state != SS_UNCONNECTED) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1486) err = -EINVAL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1487) goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1488) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1489)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1490) vsk = vsock_sk(sk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1491)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1492) if (!vsock_addr_bound(&vsk->local_addr)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1493) err = -EINVAL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1494) goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1495) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1496)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1497) sk->sk_max_ack_backlog = backlog;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1498) sk->sk_state = TCP_LISTEN;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1499)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1500) err = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1501)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1502) out:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1503) release_sock(sk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1504) return err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1505) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1506)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1507) static void vsock_update_buffer_size(struct vsock_sock *vsk,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1508) const struct vsock_transport *transport,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1509) u64 val)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1510) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1511) if (val > vsk->buffer_max_size)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1512) val = vsk->buffer_max_size;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1513)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1514) if (val < vsk->buffer_min_size)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1515) val = vsk->buffer_min_size;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1516)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1517) if (val != vsk->buffer_size &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1518) transport && transport->notify_buffer_size)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1519) transport->notify_buffer_size(vsk, &val);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1520)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1521) vsk->buffer_size = val;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1522) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1523)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1524) static int vsock_stream_setsockopt(struct socket *sock,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1525) int level,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1526) int optname,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1527) sockptr_t optval,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1528) unsigned int optlen)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1529) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1530) int err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1531) struct sock *sk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1532) struct vsock_sock *vsk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1533) const struct vsock_transport *transport;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1534) u64 val;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1535)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1536) if (level != AF_VSOCK)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1537) return -ENOPROTOOPT;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1538)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1539) #define COPY_IN(_v) \
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1540) do { \
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1541) if (optlen < sizeof(_v)) { \
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1542) err = -EINVAL; \
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1543) goto exit; \
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1544) } \
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1545) if (copy_from_sockptr(&_v, optval, sizeof(_v)) != 0) { \
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1546) err = -EFAULT; \
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1547) goto exit; \
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1548) } \
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1549) } while (0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1550)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1551) err = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1552) sk = sock->sk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1553) vsk = vsock_sk(sk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1554)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1555) lock_sock(sk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1556)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1557) transport = vsk->transport;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1558)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1559) switch (optname) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1560) case SO_VM_SOCKETS_BUFFER_SIZE:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1561) COPY_IN(val);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1562) vsock_update_buffer_size(vsk, transport, val);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1563) break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1564)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1565) case SO_VM_SOCKETS_BUFFER_MAX_SIZE:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1566) COPY_IN(val);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1567) vsk->buffer_max_size = val;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1568) vsock_update_buffer_size(vsk, transport, vsk->buffer_size);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1569) break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1570)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1571) case SO_VM_SOCKETS_BUFFER_MIN_SIZE:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1572) COPY_IN(val);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1573) vsk->buffer_min_size = val;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1574) vsock_update_buffer_size(vsk, transport, vsk->buffer_size);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1575) break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1576)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1577) case SO_VM_SOCKETS_CONNECT_TIMEOUT: {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1578) struct __kernel_old_timeval tv;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1579) COPY_IN(tv);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1580) if (tv.tv_sec >= 0 && tv.tv_usec < USEC_PER_SEC &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1581) tv.tv_sec < (MAX_SCHEDULE_TIMEOUT / HZ - 1)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1582) vsk->connect_timeout = tv.tv_sec * HZ +
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1583) DIV_ROUND_UP(tv.tv_usec, (1000000 / HZ));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1584) if (vsk->connect_timeout == 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1585) vsk->connect_timeout =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1586) VSOCK_DEFAULT_CONNECT_TIMEOUT;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1587)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1588) } else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1589) err = -ERANGE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1590) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1591) break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1592) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1593)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1594) default:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1595) err = -ENOPROTOOPT;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1596) break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1597) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1598)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1599) #undef COPY_IN
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1600)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1601) exit:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1602) release_sock(sk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1603) return err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1604) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1605)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1606) static int vsock_stream_getsockopt(struct socket *sock,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1607) int level, int optname,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1608) char __user *optval,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1609) int __user *optlen)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1610) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1611) int err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1612) int len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1613) struct sock *sk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1614) struct vsock_sock *vsk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1615) u64 val;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1616)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1617) if (level != AF_VSOCK)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1618) return -ENOPROTOOPT;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1619)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1620) err = get_user(len, optlen);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1621) if (err != 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1622) return err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1623)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1624) #define COPY_OUT(_v) \
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1625) do { \
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1626) if (len < sizeof(_v)) \
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1627) return -EINVAL; \
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1628) \
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1629) len = sizeof(_v); \
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1630) if (copy_to_user(optval, &_v, len) != 0) \
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1631) return -EFAULT; \
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1632) \
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1633) } while (0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1634)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1635) err = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1636) sk = sock->sk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1637) vsk = vsock_sk(sk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1638)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1639) switch (optname) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1640) case SO_VM_SOCKETS_BUFFER_SIZE:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1641) val = vsk->buffer_size;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1642) COPY_OUT(val);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1643) break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1644)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1645) case SO_VM_SOCKETS_BUFFER_MAX_SIZE:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1646) val = vsk->buffer_max_size;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1647) COPY_OUT(val);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1648) break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1649)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1650) case SO_VM_SOCKETS_BUFFER_MIN_SIZE:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1651) val = vsk->buffer_min_size;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1652) COPY_OUT(val);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1653) break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1654)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1655) case SO_VM_SOCKETS_CONNECT_TIMEOUT: {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1656) struct __kernel_old_timeval tv;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1657) tv.tv_sec = vsk->connect_timeout / HZ;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1658) tv.tv_usec =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1659) (vsk->connect_timeout -
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1660) tv.tv_sec * HZ) * (1000000 / HZ);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1661) COPY_OUT(tv);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1662) break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1663) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1664) default:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1665) return -ENOPROTOOPT;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1666) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1667)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1668) err = put_user(len, optlen);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1669) if (err != 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1670) return -EFAULT;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1671)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1672) #undef COPY_OUT
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1673)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1674) return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1675) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1676)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1677) static int vsock_stream_sendmsg(struct socket *sock, struct msghdr *msg,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1678) size_t len)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1679) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1680) struct sock *sk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1681) struct vsock_sock *vsk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1682) const struct vsock_transport *transport;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1683) ssize_t total_written;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1684) long timeout;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1685) int err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1686) struct vsock_transport_send_notify_data send_data;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1687) DEFINE_WAIT_FUNC(wait, woken_wake_function);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1688)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1689) sk = sock->sk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1690) vsk = vsock_sk(sk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1691) total_written = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1692) err = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1693)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1694) if (msg->msg_flags & MSG_OOB)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1695) return -EOPNOTSUPP;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1696)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1697) lock_sock(sk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1698)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1699) transport = vsk->transport;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1700)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1701) /* Callers should not provide a destination with stream sockets. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1702) if (msg->msg_namelen) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1703) err = sk->sk_state == TCP_ESTABLISHED ? -EISCONN : -EOPNOTSUPP;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1704) goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1705) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1706)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1707)	/* Send data only if neither side has shut down this direction. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1708) if (sk->sk_shutdown & SEND_SHUTDOWN ||
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1709) vsk->peer_shutdown & RCV_SHUTDOWN) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1710) err = -EPIPE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1711) goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1712) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1713)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1714) if (!transport || sk->sk_state != TCP_ESTABLISHED ||
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1715) !vsock_addr_bound(&vsk->local_addr)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1716) err = -ENOTCONN;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1717) goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1718) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1719)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1720) if (!vsock_addr_bound(&vsk->remote_addr)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1721) err = -EDESTADDRREQ;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1722) goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1723) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1724)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1725) /* Wait for room in the produce queue to enqueue our user's data. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1726) timeout = sock_sndtimeo(sk, msg->msg_flags & MSG_DONTWAIT);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1727)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1728) err = transport->notify_send_init(vsk, &send_data);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1729) if (err < 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1730) goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1731)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1732) while (total_written < len) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1733) ssize_t written;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1734)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1735) add_wait_queue(sk_sleep(sk), &wait);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1736) while (vsock_stream_has_space(vsk) == 0 &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1737) sk->sk_err == 0 &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1738) !(sk->sk_shutdown & SEND_SHUTDOWN) &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1739) !(vsk->peer_shutdown & RCV_SHUTDOWN)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1740)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1741) /* Don't wait for non-blocking sockets. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1742) if (timeout == 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1743) err = -EAGAIN;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1744) remove_wait_queue(sk_sleep(sk), &wait);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1745) goto out_err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1746) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1747)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1748) err = transport->notify_send_pre_block(vsk, &send_data);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1749) if (err < 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1750) remove_wait_queue(sk_sleep(sk), &wait);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1751) goto out_err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1752) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1753)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1754) release_sock(sk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1755) timeout = wait_woken(&wait, TASK_INTERRUPTIBLE, timeout);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1756) lock_sock(sk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1757) if (signal_pending(current)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1758) err = sock_intr_errno(timeout);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1759) remove_wait_queue(sk_sleep(sk), &wait);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1760) goto out_err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1761) } else if (timeout == 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1762) err = -EAGAIN;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1763) remove_wait_queue(sk_sleep(sk), &wait);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1764) goto out_err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1765) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1766) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1767) remove_wait_queue(sk_sleep(sk), &wait);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1768)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1769) /* These checks occur both as part of and after the loop
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1770) * conditional since we need to check before and after
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1771) * sleeping.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1772) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1773) if (sk->sk_err) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1774) err = -sk->sk_err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1775) goto out_err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1776) } else if ((sk->sk_shutdown & SEND_SHUTDOWN) ||
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1777) (vsk->peer_shutdown & RCV_SHUTDOWN)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1778) err = -EPIPE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1779) goto out_err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1780) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1781)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1782) err = transport->notify_send_pre_enqueue(vsk, &send_data);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1783) if (err < 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1784) goto out_err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1785)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1786) /* Note that enqueue will only write as many bytes as are free
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1787) * in the produce queue, so we don't need to ensure len is
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1788) * smaller than the queue size. It is the caller's
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1789) * responsibility to check how many bytes we were able to send.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1790) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1791)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1792) written = transport->stream_enqueue(
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1793) vsk, msg,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1794) len - total_written);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1795) if (written < 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1796) err = -ENOMEM;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1797) goto out_err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1798) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1799)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1800) total_written += written;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1801)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1802) err = transport->notify_send_post_enqueue(
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1803) vsk, written, &send_data);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1804) if (err < 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1805) goto out_err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1806)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1807) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1808)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1809) out_err:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1810) if (total_written > 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1811) err = total_written;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1812) out:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1813) release_sock(sk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1814) return err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1815) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1816)
static int
vsock_stream_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
		     int flags)
{
	struct sock *sk;
	struct vsock_sock *vsk;
	const struct vsock_transport *transport;
	int err;
	size_t target;
	ssize_t copied;
	long timeout;
	struct vsock_transport_recv_notify_data recv_data;

	DEFINE_WAIT(wait);

	sk = sock->sk;
	vsk = vsock_sk(sk);
	err = 0;

	lock_sock(sk);

	transport = vsk->transport;

	if (!transport || sk->sk_state != TCP_ESTABLISHED) {
		/* Recvmsg is supposed to return 0 if a peer performs an
		 * orderly shutdown. Differentiate between that case and when a
		 * peer has not connected or a local shutdown occurred with the
		 * SOCK_DONE flag.
		 */
		if (sock_flag(sk, SOCK_DONE))
			err = 0;
		else
			err = -ENOTCONN;

		goto out;
	}

	if (flags & MSG_OOB) {
		err = -EOPNOTSUPP;
		goto out;
	}

	/* We don't check the peer_shutdown flag here since the peer may have
	 * shut down while there is still data in the queue that the local
	 * socket can receive.
	 */
	if (sk->sk_shutdown & RCV_SHUTDOWN) {
		err = 0;
		goto out;
	}

	/* It is valid on Linux to pass in a zero-length receive buffer. This
	 * is not an error. We may as well bail out now.
	 */
	if (!len) {
		err = 0;
		goto out;
	}

	/* We must not copy less than target bytes into the user's buffer
	 * before returning successfully, so we wait for the consume queue to
	 * have that much data to consume before dequeueing. Note that this
	 * makes it impossible to handle cases where target is greater than the
	 * queue size.
	 */
	target = sock_rcvlowat(sk, flags & MSG_WAITALL, len);
	if (target >= transport->stream_rcvhiwat(vsk)) {
		err = -ENOMEM;
		goto out;
	}
	timeout = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);
	copied = 0;

	err = transport->notify_recv_init(vsk, target, &recv_data);
	if (err < 0)
		goto out;

	while (1) {
		s64 ready;

		prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE);
		ready = vsock_stream_has_data(vsk);

		if (ready == 0) {
			if (sk->sk_err != 0 ||
			    (sk->sk_shutdown & RCV_SHUTDOWN) ||
			    (vsk->peer_shutdown & SEND_SHUTDOWN)) {
				finish_wait(sk_sleep(sk), &wait);
				break;
			}
			/* Don't wait for non-blocking sockets. */
			if (timeout == 0) {
				err = -EAGAIN;
				finish_wait(sk_sleep(sk), &wait);
				break;
			}

			err = transport->notify_recv_pre_block(
					vsk, target, &recv_data);
			if (err < 0) {
				finish_wait(sk_sleep(sk), &wait);
				break;
			}
			release_sock(sk);
			timeout = schedule_timeout(timeout);
			lock_sock(sk);

			if (signal_pending(current)) {
				err = sock_intr_errno(timeout);
				finish_wait(sk_sleep(sk), &wait);
				break;
			} else if (timeout == 0) {
				err = -EAGAIN;
				finish_wait(sk_sleep(sk), &wait);
				break;
			}
		} else {
			ssize_t read;

			finish_wait(sk_sleep(sk), &wait);

			if (ready < 0) {
				/* Invalid queue pair content. XXX This should
				 * be changed to a connection reset in a later
				 * change.
				 */
				err = -ENOMEM;
				goto out;
			}

			err = transport->notify_recv_pre_dequeue(
					vsk, target, &recv_data);
			if (err < 0)
				break;

			read = transport->stream_dequeue(
					vsk, msg,
					len - copied, flags);
			if (read < 0) {
				err = -ENOMEM;
				break;
			}

			copied += read;

			err = transport->notify_recv_post_dequeue(
					vsk, target, read,
					!(flags & MSG_PEEK), &recv_data);
			if (err < 0)
				goto out;

			if (read >= target || flags & MSG_PEEK)
				break;

			target -= read;
		}
	}

	if (sk->sk_err)
		err = -sk->sk_err;
	else if (sk->sk_shutdown & RCV_SHUTDOWN)
		err = 0;

	if (copied > 0)
		err = copied;

out:
	release_sock(sk);
	return err;
}

static const struct proto_ops vsock_stream_ops = {
	.family = PF_VSOCK,
	.owner = THIS_MODULE,
	.release = vsock_release,
	.bind = vsock_bind,
	.connect = vsock_stream_connect,
	.socketpair = sock_no_socketpair,
	.accept = vsock_accept,
	.getname = vsock_getname,
	.poll = vsock_poll,
	.ioctl = sock_no_ioctl,
	.listen = vsock_listen,
	.shutdown = vsock_shutdown,
	.setsockopt = vsock_stream_setsockopt,
	.getsockopt = vsock_stream_getsockopt,
	.sendmsg = vsock_stream_sendmsg,
	.recvmsg = vsock_stream_recvmsg,
	.mmap = sock_no_mmap,
	.sendpage = sock_no_sendpage,
};

static int vsock_create(struct net *net, struct socket *sock,
			int protocol, int kern)
{
	struct vsock_sock *vsk;
	struct sock *sk;
	int ret;

	if (!sock)
		return -EINVAL;

	if (protocol && protocol != PF_VSOCK)
		return -EPROTONOSUPPORT;

	switch (sock->type) {
	case SOCK_DGRAM:
		sock->ops = &vsock_dgram_ops;
		break;
	case SOCK_STREAM:
		sock->ops = &vsock_stream_ops;
		break;
	default:
		return -ESOCKTNOSUPPORT;
	}

	sock->state = SS_UNCONNECTED;

	sk = __vsock_create(net, sock, NULL, GFP_KERNEL, 0, kern);
	if (!sk)
		return -ENOMEM;

	vsk = vsock_sk(sk);

	if (sock->type == SOCK_DGRAM) {
		ret = vsock_assign_transport(vsk, NULL);
		if (ret < 0) {
			sock_put(sk);
			return ret;
		}
	}

	vsock_insert_unbound(vsk);

	return 0;
}

static const struct net_proto_family vsock_family_ops = {
	.family = AF_VSOCK,
	.create = vsock_create,
	.owner = THIS_MODULE,
};

static long vsock_dev_do_ioctl(struct file *filp,
			       unsigned int cmd, void __user *ptr)
{
	u32 __user *p = ptr;
	u32 cid = VMADDR_CID_ANY;
	int retval = 0;

	switch (cmd) {
	case IOCTL_VM_SOCKETS_GET_LOCAL_CID:
		/* To be compatible with the VMCI behavior, we prioritize the
		 * guest CID instead of the well-known host CID
		 * (VMADDR_CID_HOST).
		 */
		if (transport_g2h)
			cid = transport_g2h->get_local_cid();
		else if (transport_h2g)
			cid = transport_h2g->get_local_cid();

		if (put_user(cid, p) != 0)
			retval = -EFAULT;
		break;

	default:
		pr_err("Unknown ioctl %d\n", cmd);
		retval = -EINVAL;
	}

	return retval;
}

static long vsock_dev_ioctl(struct file *filp,
			    unsigned int cmd, unsigned long arg)
{
	return vsock_dev_do_ioctl(filp, cmd, (void __user *)arg);
}

#ifdef CONFIG_COMPAT
static long vsock_dev_compat_ioctl(struct file *filp,
				   unsigned int cmd, unsigned long arg)
{
	return vsock_dev_do_ioctl(filp, cmd, compat_ptr(arg));
}
#endif

static const struct file_operations vsock_device_ops = {
	.owner		= THIS_MODULE,
	.unlocked_ioctl	= vsock_dev_ioctl,
#ifdef CONFIG_COMPAT
	.compat_ioctl	= vsock_dev_compat_ioctl,
#endif
	.open		= nonseekable_open,
};

static struct miscdevice vsock_device = {
	.name		= "vsock",
	.fops		= &vsock_device_ops,
};

static int __init vsock_init(void)
{
	int err = 0;

	vsock_init_tables();

	vsock_proto.owner = THIS_MODULE;
	vsock_device.minor = MISC_DYNAMIC_MINOR;
	err = misc_register(&vsock_device);
	if (err) {
		pr_err("Failed to register misc device\n");
		goto err_reset_transport;
	}

	err = proto_register(&vsock_proto, 1);	/* we want our slab */
	if (err) {
		pr_err("Cannot register vsock protocol\n");
		goto err_deregister_misc;
	}

	err = sock_register(&vsock_family_ops);
	if (err) {
		pr_err("could not register af_vsock (%d) address family: %d\n",
		       AF_VSOCK, err);
		goto err_unregister_proto;
	}

	return 0;

err_unregister_proto:
	proto_unregister(&vsock_proto);
err_deregister_misc:
	misc_deregister(&vsock_device);
err_reset_transport:
	return err;
}

static void __exit vsock_exit(void)
{
	misc_deregister(&vsock_device);
	sock_unregister(AF_VSOCK);
	proto_unregister(&vsock_proto);
}

const struct vsock_transport *vsock_core_get_transport(struct vsock_sock *vsk)
{
	return vsk->transport;
}
EXPORT_SYMBOL_GPL(vsock_core_get_transport);

int vsock_core_register(const struct vsock_transport *t, int features)
{
	const struct vsock_transport *t_h2g, *t_g2h, *t_dgram, *t_local;
	int err = mutex_lock_interruptible(&vsock_register_mutex);

	if (err)
		return err;

	t_h2g = transport_h2g;
	t_g2h = transport_g2h;
	t_dgram = transport_dgram;
	t_local = transport_local;

	if (features & VSOCK_TRANSPORT_F_H2G) {
		if (t_h2g) {
			err = -EBUSY;
			goto err_busy;
		}
		t_h2g = t;
	}

	if (features & VSOCK_TRANSPORT_F_G2H) {
		if (t_g2h) {
			err = -EBUSY;
			goto err_busy;
		}
		t_g2h = t;
	}

	if (features & VSOCK_TRANSPORT_F_DGRAM) {
		if (t_dgram) {
			err = -EBUSY;
			goto err_busy;
		}
		t_dgram = t;
	}

	if (features & VSOCK_TRANSPORT_F_LOCAL) {
		if (t_local) {
			err = -EBUSY;
			goto err_busy;
		}
		t_local = t;
	}

	transport_h2g = t_h2g;
	transport_g2h = t_g2h;
	transport_dgram = t_dgram;
	transport_local = t_local;

err_busy:
	mutex_unlock(&vsock_register_mutex);
	return err;
}
EXPORT_SYMBOL_GPL(vsock_core_register);

void vsock_core_unregister(const struct vsock_transport *t)
{
	mutex_lock(&vsock_register_mutex);

	if (transport_h2g == t)
		transport_h2g = NULL;

	if (transport_g2h == t)
		transport_g2h = NULL;

	if (transport_dgram == t)
		transport_dgram = NULL;

	if (transport_local == t)
		transport_local = NULL;

	mutex_unlock(&vsock_register_mutex);
}
EXPORT_SYMBOL_GPL(vsock_core_unregister);

module_init(vsock_init);
module_exit(vsock_exit);

MODULE_AUTHOR("VMware, Inc.");
MODULE_DESCRIPTION("VMware Virtual Socket Family");
MODULE_VERSION("1.0.2.0-k");
MODULE_LICENSE("GPL v2");