Commit Graph

40 Commits

Author SHA1 Message Date
Kate Hsuan 3cd03b9cd8 media: subdev: Add for_each_active_route() macro
JIRA: https://issues.redhat.com/browse/RHEL-67885

commit 837f92f070f6b7e877143eb168025995688b9756
Author: Jacopo Mondi <jacopo+renesas@jmondi.org>
Date: Sun, 17 Oct 2021 20:24:42 +0200

  Add a for_each_active_route() macro to replace the repeated pattern
  of iterating on the active routes of a routing table.

  Signed-off-by: Jacopo Mondi <jacopo+renesas@jmondi.org>
  Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ideasonboard.com>
  Signed-off-by: Mauro Carvalho Chehab <mchehab@kernel.org>

Signed-off-by: Kate Hsuan <hpa@redhat.com>
2024-11-27 09:40:34 +08:00
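
As a sketch of what the macro replaces, the open-coded pattern and its
replacement look roughly like this (field names follow the upstream
v4l2-subdev headers; the loop bodies are illustrative):

    struct v4l2_subdev_route *route;
    unsigned int i;

    /* Open-coded pattern the macro replaces: skip inactive routes. */
    for (i = 0; i < routing->num_routes; i++) {
            route = &routing->routes[i];
            if (!(route->flags & V4L2_SUBDEV_ROUTE_FL_ACTIVE))
                    continue;
            /* ... use route->sink_pad / route->source_pad ... */
    }

    /* Equivalent with the new macro: only active routes are visited. */
    for_each_active_route(routing, route) {
            /* ... use route->sink_pad / route->source_pad ... */
    }
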
Andrew Halaney b8dd09538a printk: Prepare for SRCU console list protection
JIRA: https://issues.redhat.com/browse/RHEL-24205
Conflicts: Minor context diff in .clang-format

commit 6c4afa79147e4aa86665795695ac4a5f25e73176
Author: John Ogness <john.ogness@linutronix.de>
Date:   Wed Nov 16 17:27:15 2022 +0106

    printk: Prepare for SRCU console list protection

    Provide an NMI-safe SRCU protected variant to walk the console list.

    Note that all console fields are now set before adding the console
    to the list to avoid the console becoming visible to SRCU readers
    before being fully initialized.

    This is a preparatory change for a new console infrastructure which
    operates independently of the console BKL.

    Suggested-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: John Ogness <john.ogness@linutronix.de>
    Acked-by: Miguel Ojeda <ojeda@kernel.org>
    Reviewed-by: Paul E. McKenney <paulmck@kernel.org>
    Reviewed-by: Petr Mladek <pmladek@suse.com>
    Signed-off-by: Petr Mladek <pmladek@suse.com>
    Link: https://lore.kernel.org/r/20221116162152.193147-4-john.ogness@linutronix.de

Signed-off-by: Andrew Halaney <ahalaney@redhat.com>
2024-05-09 11:25:16 -04:00
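
The reader-side pattern this prepares for looks roughly as follows;
console_srcu_read_lock()/console_srcu_read_unlock() and
for_each_console_srcu() are the helpers introduced by this series (the
loop body is illustrative):

    struct console *con;
    int cookie;

    /* NMI-safe: the SRCU read-side section never blocks. */
    cookie = console_srcu_read_lock();
    for_each_console_srcu(con) {
            /* con is fully initialized before it became list-visible */
    }
    console_srcu_read_unlock(cookie);
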
Prarit Bhargava a0ed19a458 cpumask: re-introduce constant-sized cpumask optimizations
JIRA: https://issues.redhat.com/browse/RHEL-25415

Conflicts: Minor drift issues.

commit 596ff4a09b8981790e15572e8e7bc904df5835e7
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Sat Mar 4 13:35:43 2023 -0800

    cpumask: re-introduce constant-sized cpumask optimizations

    Commit aa47a7c215e7 ("lib/cpumask: deprecate nr_cpumask_bits") resulted
    in the cpumask operations potentially becoming hugely less efficient,
    because suddenly the cpumask was always considered to be variable-sized.

    The optimization was then later added back in a limited form by commit
    6f9c07be9d02 ("lib/cpumask: add FORCE_NR_CPUS config option"), but that
    FORCE_NR_CPUS option is not useful in a generic kernel and more of a
    special case for embedded situations with fixed hardware.

    Instead, just re-introduce the optimization, with some changes.

    Instead of depending on CPUMASK_OFFSTACK being false, and then always
    using the full constant cpumask width, this introduces three different
    cpumask "sizes":

     - the exact size (nr_cpumask_bits) remains identical to nr_cpu_ids.

       This is used for situations where we should use the exact size.

     - the "small" size (small_cpumask_bits) is the NR_CPUS constant if it
       fits in a single word and the bitmap operations thus end up able
       to trigger the "small_const_nbits()" optimizations.

       This is used for the operations that have optimized single-word
       cases that get inlined, notably the bit find and scanning functions.

     - the "large" size (large_cpumask_bits) is the NR_CPUS constant if it
       is a sufficiently small constant that makes simple "copy" and
       "clear" operations more efficient.

       This is arbitrarily set at four words or less.

    As an example of this situation, without this fixed-size optimization,
    cpumask_clear() will generate code like

            movl    nr_cpu_ids(%rip), %edx
            addq    $63, %rdx
            shrq    $3, %rdx
            andl    $-8, %edx
            callq   memset@PLT

    on x86-64, because it would calculate the "exact" number of longwords
    that need to be cleared.

    In contrast, with this patch, using a MAX_CPU of 64 (which is quite a
    reasonable value to use), the above becomes a single

            movq $0,cpumask

    instruction instead, because instead of caring to figure out exactly how
    many CPUs the system has, it just knows that the cpumask will be a
    single word and can just clear it all.

    Note that this does end up tightening the rules a bit from the original
    version in another way: operations that set bits in the cpumask are now
    limited to the actual nr_cpu_ids limit, whereas we used to do the
    nr_cpumask_bits thing almost everywhere in the cpumask code.

    But if you just clear bits, or scan for bits, we can use the simpler
    compile-time constants.

    In the process, remove 'cpumask_complement()' and 'for_each_cpu_not()'
    which were not useful, and which fundamentally have to be limited to
    'nr_cpu_ids'.  Better remove them now than have somebody introduce use
    of them later.

    Of course, on x86-64 with MAXSMP there is no sane small compile-time
    constant for the cpumask sizes, and we end up using the actual CPU bits,
    and will generate the above kind of horrors regardless.  Please don't
    use MAXSMP unless you really expect to have machines with thousands of
    cores.

    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Signed-off-by: Prarit Bhargava <prarit@redhat.com>
2024-03-20 09:42:41 -04:00
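
The effect on the inline helpers can be sketched as follows; this
mirrors the shape of the upstream cpumask_first() after the patch,
assuming CPUMASK_OFFSTACK=n and NR_CPUS <= BITS_PER_LONG:

    /* small_cpumask_bits collapses to the NR_CPUS constant here, so
     * find_first_bit() hits its small_const_nbits() single-word fast
     * path and inlines to a couple of instructions. */
    static inline unsigned int cpumask_first(const struct cpumask *srcp)
    {
            return find_first_bit(cpumask_bits(srcp), small_cpumask_bits);
    }
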
Jerry Snitselaar f66deef99c iommu: Add for_each_group_device()
JIRA: https://issues.redhat.com/browse/RHEL-10094
Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
Conflicts: Context diff in clang-format

commit 3006b15b364a34a2a19b45bb2948dd6a83c5e1fe
Author: Jason Gunthorpe <jgg@ziepe.ca>
Date:   Thu May 11 01:42:00 2023 -0300

    iommu: Add for_each_group_device()

    Convenience macro to iterate over every struct group_device in the group.

    Replace all open coded list_for_each_entry's with this macro.

    Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>
    Tested-by: Heiko Stuebner <heiko@sntech.de>
    Tested-by: Niklas Schnelle <schnelle@linux.ibm.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
    Link: https://lore.kernel.org/r/2-v5-1b99ae392328+44574-iommu_err_unwind_jgg@nvidia.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

(cherry picked from commit 3006b15b364a34a2a19b45bb2948dd6a83c5e1fe)
Signed-off-by: Jerry Snitselaar <jsnitsel@redhat.com>
2023-10-27 01:26:58 -07:00
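
A minimal sketch of a converted call site, assuming the group mutex is
held as the iteration requires (the loop body is illustrative):

    struct group_device *gdev;

    lockdep_assert_held(&group->mutex);
    for_each_group_device(group, gdev) {
            /* gdev->dev is the struct device enrolled in the group */
    }
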
Myron Stowe 0a08ebe6f0 PCI: Introduce pci_dev_for_each_resource()
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2228915
Upstream Status: 09cc900632400079619e9154604fd299c2cc9a5a

commit 09cc900632400079619e9154604fd299c2cc9a5a
Author: Mika Westerberg <mika.westerberg@linux.intel.com>
Date:   Thu Mar 30 19:24:30 2023 +0300

    PCI: Introduce pci_dev_for_each_resource()

    Instead of open-coding it everywhere introduce a tiny helper that can be
    used to iterate over each resource of a PCI device, and convert the most
    obvious users into it.

    While at it drop doubled empty line before pdev_sort_resources().

    No functional changes intended.

    Suggested-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
    Link: https://lore.kernel.org/r/20230330162434.35055-4-andriy.shevchenko@linux.intel.com
    Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
    Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
    Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
    Reviewed-by: Krzysztof Wilczyński <kw@linux.com>

Signed-off-by: Myron Stowe <mstowe@redhat.com>
2023-09-05 09:16:40 -06:00
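
A converted user reads roughly like this sketch (the filter in the body
is illustrative):

    struct resource *res;

    pci_dev_for_each_resource(pdev, res) {
            if (resource_type(res) != IORESOURCE_MEM)
                    continue;
            /* ... handle the device's memory BARs and windows ... */
    }
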
Jan Stancek 9e3cead44f Merge: Rebase VFIO and IOMMUFD up to v6.2
MR: https://gitlab.com/redhat/centos-stream/src/kernel/centos-stream-9/-/merge_requests/2131

Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2177087

JIRA: https://issues.redhat.com/browse/RHELPLAN-151389

Upstream Status: mainline

Testing: GPU, NIC PF/VF, USB, mdev assignment, coverage on Intel and AMD hosts.

Refresh vfio to v6.2, including new iommufd subsystem and vfio integration (disabled).

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>

Approved-by: John W. Linville <linville@redhat.com>
Approved-by: Cédric Le Goater <clg@redhat.com>
Approved-by: Jerry Snitselaar <jsnitsel@redhat.com>
Approved-by: Mika Penttilä <mpenttil@redhat.com>
Approved-by: Eric Auger <eric.auger@redhat.com>
Approved-by: Cornelia Huck <cohuck@redhat.com>
Approved-by: Prarit Bhargava <prarit@redhat.com>
Approved-by: Phil Auld <pauld@redhat.com>

Signed-off-by: Jan Stancek <jstancek@redhat.com>
2023-04-06 14:03:52 +02:00
Jan Stancek 17db97a3eb Merge: Update kernel's PCI subsystem to v6.2
MR: https://gitlab.com/redhat/centos-stream/src/kernel/centos-stream-9/-/merge_requests/2147

```
This series updates RHEL9's PCI subsystem with content from upstream v6.2 -
  Merge tag 'pci-v6.2-changes' of git://git.kernel.org/pub/../helgaas/pci
  https://lkml.org/lkml/2022/12/13/1106
  commit c7020e1b346d5840e93b58cc4f2c67fc645d8df9
  Merge: a0a6c76cf2a5 f826afe5eae8
  81 files changed, 2748 insertions(+), 928 deletions(-)

  Merge tag 'pci-v6.2-fixes-1' of git://git.kernel.org/pub/../helgaas/pci
  https://lkml.org/lkml/2023/1/13/1266
  commit 9e058c2952cad4baca0464266c4b1fe68f77052b
  Merge: 92783a90bcbd fd3a8cff4d4a
  2 files changed, 39 insertions(+), 7 deletions(-)

  Merge tag 'pci-v6.2-fixes-2'
  https://lkml.org/lkml/2023/2/10/1009
  commit 4cfd5afcd87eb213f08863b6f34944978b0a678d
  Merge: 4f72a263e162 ff209ecc376a
  4 files changed, 39 insertions(+), 93 deletions(-)

Two patches, 001/114 and 107/114, had minor conflicts which are covered in
their respective commit messages; otherwise, patches within the series
back-ported cleanly.

Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2166398
Depends: N/A

Signed-off-by: Myron Stowe <mstowe@redhat.com>
```

Approved-by: John W. Linville <linville@redhat.com>
Approved-by: Steve Best <sbest@redhat.com>
Approved-by: Prarit Bhargava <prarit@redhat.com>
Approved-by: Tony Camuso <tcamuso@redhat.com>
Approved-by: Lenny Szubowicz <lszubowi@redhat.com>
Approved-by: Jerry Snitselaar <jsnitsel@redhat.com>

Signed-off-by: Jan Stancek <jstancek@redhat.com>
2023-03-27 07:05:15 +02:00
David Arcari 0bcef6076d genirq/msi: Make interrupt allocation less convoluted
Bugzilla: https://bugzilla.redhat.com/2175165

commit ef8dd01538ea2553ab101ddce6a85a321406d9c0
Author: Thomas Gleixner <tglx@linutronix.de>
Date:   Mon Dec 6 23:51:44 2021 +0100

    genirq/msi: Make interrupt allocation less convoluted

    There is no real reason to do several loops over the MSI descriptors
    instead of just doing one loop. In case of an error everything is undone
    anyway so it does not matter whether it's a partial or a full rollback.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Tested-by: Michael Kelley <mikelley@microsoft.com>
    Tested-by: Nishanth Menon <nm@ti.com>
    Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
    Link: https://lore.kernel.org/r/20211206210749.010234767@linutronix.de

Signed-off-by: David Arcari <darcari@redhat.com>
2023-03-13 09:52:35 -04:00
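
The shape of the change, not the exact in-tree code, is sketched below;
alloc_one_irq() is a hypothetical stand-in for the per-descriptor
allocation:

    struct msi_desc *desc;
    int ret;

    msi_for_each_desc(desc, dev, MSI_DESC_NOTASSOCIATED) {
            ret = alloc_one_irq(domain, desc);      /* hypothetical helper */
            if (ret)
                    goto fail;
    }
    return 0;

    fail:
    /* A full rollback is fine even after a partial failure. */
    msi_domain_free_irqs(domain, dev);
    return ret;
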
Alex Williamson ddeeb189b4 iommufd: Data structure to provide IOVA to PFN mapping
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2177087
JIRA: https://issues.redhat.com/browse/RHELPLAN-151389
Conflicts: clang-format out of sync, position correct relative to
           previous additions.

commit 51fe6141f0f64ae0bbc096a41a07572273e8c0ef
Author: Jason Gunthorpe <jgg@ziepe.ca>
Date:   Tue Nov 29 16:29:33 2022 -0400

    iommufd: Data structure to provide IOVA to PFN mapping

    This is the remainder of the IOAS data structure. Provide an object called
    an io_pagetable that is composed of iopt_areas pointing at iopt_pages,
    along with a list of iommu_domains that mirror the IOVA to PFN map.

    At the top this is a simple interval tree of iopt_areas indicating the map
    of IOVA to iopt_pages. An xarray keeps track of a list of domains. Based
    on the attached domains there is a minimum alignment for areas (which may
    be smaller than PAGE_SIZE), an interval tree of reserved IOVA that can't
    be mapped, and an interval tree of allowed IOVA that can always be mapped.

    The concept of an 'access' refers to something like a VFIO mdev that is
    accessing the IOVA and using a 'struct page *' for CPU based access.

    Externally an API is provided that matches the requirements of the IOCTL
    interface for map/unmap and domain attachment.

    The API provides a 'copy' primitive to establish a new IOVA map in a
    different IOAS from an existing mapping by re-using the iopt_pages. This
    is the basic mechanism to provide single pinning.

    This is designed to support a pre-registration flow where userspace would
    set up a dummy IOAS with no domains, map in memory and then establish an
    access to pin all PFNs into the xarray.

    Copy can then be used to create new IOVA mappings in a different IOAS,
    with iommu_domains attached. Upon copy the PFNs will be read out of the
    xarray and mapped into the iommu_domains, avoiding any pin_user_pages()
    overheads.

    Link: https://lore.kernel.org/r/10-v6-a196d26f289e+11787-iommufd_jgg@nvidia.com
    Tested-by: Nicolin Chen <nicolinc@nvidia.com>
    Tested-by: Yi Liu <yi.l.liu@intel.com>
    Tested-by: Lixiao Yang <lixiao.yang@intel.com>
    Tested-by: Matthew Rosato <mjrosato@linux.ibm.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>
    Signed-off-by: Yi Liu <yi.l.liu@intel.com>
    Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2023-03-09 16:36:08 -07:00
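
The core lookup can be sketched with the generic interval-tree API; the
field and type names below approximate the upstream io_pagetable
layout:

    struct interval_tree_node *node;

    node = interval_tree_iter_first(&iopt->area_itree, iova, iova_last);
    while (node) {
            struct iopt_area *area =
                    container_of(node, struct iopt_area, node);

            /* area spans [node->start, node->last] and points at the
             * iopt_pages backing that slice of IOVA */
            node = interval_tree_iter_next(node, iova, iova_last);
    }
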
Alex Williamson cd16a1821b iommufd: PFN handling for iopt_pages
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2177087
JIRA: https://issues.redhat.com/browse/RHELPLAN-151389
Conflicts: clang-format out of sync, position correct alphabetically

commit f394576eb11dbcd3a740fa41e577b97f0720d26e
Author: Jason Gunthorpe <jgg@ziepe.ca>
Date:   Tue Nov 29 16:29:31 2022 -0400

    iommufd: PFN handling for iopt_pages

    The top of the data structure provides an IO Address Space (IOAS) that is
    similar to a VFIO container. The IOAS allows map/unmap of memory into
    ranges of IOVA called iopt_areas. Multiple IOMMU domains (IO page tables)
    and in-kernel accesses (like VFIO mdevs) can be attached to the IOAS to
    access the PFNs that those IOVA areas cover.

    The IO Address Space (IOAS) data structure is composed of:
     - struct io_pagetable holding the IOVA map
     - struct iopt_areas representing populated portions of IOVA
     - struct iopt_pages representing the storage of PFNs
     - struct iommu_domain representing each IO page table in the system IOMMU
     - struct iopt_pages_access representing in-kernel accesses of PFNs (i.e.
       VFIO mdevs)
     - struct xarray pinned_pfns holding a list of pages pinned by in-kernel
       accesses

    This patch introduces the lowest part of the data structure - the movement
    of PFNs in a tiered storage scheme:
     1) iopt_pages::pinned_pfns xarray
     2) Multiple iommu_domains
     3) The origin of the PFNs, i.e. the userspace pointer

    PFNs have to be copied between all combinations of tiers, depending on the
    configuration.

    The interface is an iterator called a 'pfn_reader' which determines in
    which tier each PFN is stored and loads it into a list of PFNs held in a
    struct pfn_batch.

    Each step of the iterator will fill up the pfn_batch, then the caller can
    use the pfn_batch to send the PFNs to the required destination. Repeating
    this loop will read all the PFNs in an IOVA range.

    The pfn_reader and pfn_batch also keep track of the pinned page accounting.

    While PFNs are always stored and accessed as full PAGE_SIZE units the
    iommu_domain tier can store with a sub-page offset/length to support
    IOMMUs with a smaller IOPTE size than PAGE_SIZE.

    Link: https://lore.kernel.org/r/8-v6-a196d26f289e+11787-iommufd_jgg@nvidia.com
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>
    Tested-by: Nicolin Chen <nicolinc@nvidia.com>
    Tested-by: Yi Liu <yi.l.liu@intel.com>
    Tested-by: Lixiao Yang <lixiao.yang@intel.com>
    Tested-by: Matthew Rosato <mjrosato@linux.ibm.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2023-03-09 16:36:08 -07:00
Alex Williamson b339361f99 interval-tree: Add a utility to iterate over spans in an interval tree
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2177087
JIRA: https://issues.redhat.com/browse/RHELPLAN-151389
Conflicts: clang-format out of sync, position correct alphabetically

commit 5fe937862c8426f24cd1dcbf7c22fb1a31069b4f
Author: Jason Gunthorpe <jgg@ziepe.ca>
Date:   Tue Nov 29 16:29:26 2022 -0400

    interval-tree: Add a utility to iterate over spans in an interval tree

    The span iterator travels over the indexes of the interval_tree, not the
    nodes, and classifies spans of indexes as either 'used' or 'hole'.

    'used' spans are fully covered by nodes in the tree and 'hole' spans have
    no node intersecting the span.

    This is done greedily such that spans are maximally sized and every
    iteration step switches between used/hole.

    As an example a trivial allocator can be written as:

            for (interval_tree_span_iter_first(&span, itree, 0, ULONG_MAX);
                 !interval_tree_span_iter_done(&span);
                 interval_tree_span_iter_next(&span))
                    if (span.is_hole &&
                        span.last_hole - span.start_hole >= allocation_size - 1)
                            return span.start_hole;

    With all the tricky boundary conditions handled by the library code.

    The following iommufd patches have several algorithms for its overlapping
    node interval trees that are significantly simplified with this kind of
    iteration primitive. As it seems generally useful, put it into lib/.

    Link: https://lore.kernel.org/r/3-v6-a196d26f289e+11787-iommufd_jgg@nvidia.com
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>
    Reviewed-by: Eric Auger <eric.auger@redhat.com>
    Tested-by: Nicolin Chen <nicolinc@nvidia.com>
    Tested-by: Yi Liu <yi.l.liu@intel.com>
    Tested-by: Lixiao Yang <lixiao.yang@intel.com>
    Tested-by: Matthew Rosato <mjrosato@linux.ibm.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2023-03-09 16:36:08 -07:00
Myron Stowe 46482b9643 PCI/DOE: Add DOE mailbox support functions
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2166398
Upstream Status: 9d24322e887b6a3d3f9f9c3e76937a646102c8c1

commit 9d24322e887b6a3d3f9f9c3e76937a646102c8c1
Author: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Date:   Tue Jul 19 13:52:46 2022 -0700

    PCI/DOE: Add DOE mailbox support functions

    Introduced in PCIe r6.0, sec 6.30, DOE provides a config space based
    mailbox with standard protocol discovery.  Each mailbox is accessed
    through a DOE Extended Capability.

    Each DOE mailbox must support the DOE discovery protocol in addition to
    any number of additional protocols.

    Define core PCIe functionality to manage a single PCIe DOE mailbox at a
    defined config space offset.  Functionality includes iteration,
    creation, query of supported protocols, and task submission.  Destruction
    of the mailboxes is device managed.

    Cc: "Li, Ming" <ming4.li@intel.com>
    Cc: Bjorn Helgaas <helgaas@kernel.org>
    Cc: Matthew Wilcox <willy@infradead.org>
    Acked-by: Bjorn Helgaas <helgaas@kernel.org>
    Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
    Co-developed-by: Ira Weiny <ira.weiny@intel.com>
    Signed-off-by: Ira Weiny <ira.weiny@intel.com>
    Link: https://lore.kernel.org/r/20220719205249.566684-4-ira.weiny@intel.com
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>

Signed-off-by: Myron Stowe <mstowe@redhat.com>
2023-03-07 07:21:24 -07:00
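
Mailbox discovery amounts to walking the DOE extended capabilities;
this sketch mirrors the pci_doe_for_each_off() helper added here (the
constructor named in the comment is the device-managed one from the
same series):

    u16 off = 0;

    while ((off = pci_find_next_ext_capability(pdev, off,
                                               PCI_EXT_CAP_ID_DOE)))
            /* pcim_doe_create_mb(pdev, off) would manage a mailbox here */;
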
Guillaume Nault 591dc0c04b inet: ping: use hlist_nulls rcu iterator during lookup
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2162116
Upstream Status: linux.git
Conflicts: (context) Different set of ForEachMacros in .clang-format.

commit c25b7a7a565e5eeb2459b37583eea67942057511
Author: Florian Westphal <fw@strlen.de>
Date:   Tue Nov 29 15:06:44 2022 +0100

    inet: ping: use hlist_nulls rcu iterator during lookup

    ping_lookup() does not acquire the table spinlock, so iteration should
    use hlist_nulls_for_each_entry_rcu().

    Spotted during code review.

    Fixes: dbca1596bbb0 ("ping: convert to RCU lookups, get rid of rwlock")
    Cc: Eric Dumazet <edumazet@google.com>
    Signed-off-by: Florian Westphal <fw@strlen.de>
    Link: https://lore.kernel.org/r/20221129140644.28525-1-fw@strlen.de
    Signed-off-by: Paolo Abeni <pabeni@redhat.com>

Signed-off-by: Guillaume Nault <gnault@redhat.com>
2023-01-18 20:48:15 +01:00
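
The fixed lookup follows the standard lockless nulls pattern, roughly
as below (hslot stands in for the ping hash bucket; the match logic is
elided):

    struct sock *sk;
    struct hlist_nulls_node *hnode;

    rcu_read_lock();
    hlist_nulls_for_each_entry_rcu(sk, hnode, hslot, sk_nulls_node) {
            /* compare ident/addresses; the nulls marker lets readers
             * detect a chain that moved buckets mid-walk */
    }
    rcu_read_unlock();
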
Miguel Ojeda 4792f9dd12 clang-format: Update with the latest for_each macro list
Re-run the shell fragment that generated the original list.

Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
2021-05-12 23:32:39 +02:00
Linus Torvalds 825d150875 cxl for 5.12
Introduce an initial driver for CXL 2.0 Type-3 Memory Devices. CXL is
 Compute Express Link which released the 2.0 specification in November.
 The Linux relevant changes in CXL 2.0 are support for an OS to
 dynamically assign address space to memory devices, support for
 switches, persistent memory, and hotplug. A Type-3 Memory Device is a
 PCI enumerated device presenting the CXL Memory Device Class Code and
 implementing the CXL.mem protocol. CXL.mem allows a device to advertise
 CPU and I/O coherent memory to the system, i.e. typical "System RAM" and
 "Persistent Memory" in Linux /proc/iomem terms.
 
 In addition to the CXL.mem fast path there is an administrative command
 hardware mailbox interface for maintenance and provisioning. It is this
 command interface that is the focus of the initial driver. With this
 driver a CXL device that is mapped by the BIOS can be administered by
 Linux. Linux support for CXL PMEM and dynamic CXL address space
 management are to be implemented post v5.12.
 
 4cdadfd5e0 cxl/mem: Introduce a driver for CXL-2.0-Type-3 endpoints
 Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
 
 8adaf747c9 cxl/mem: Find device capabilities
 Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
 
 b39cb1052a cxl/mem: Register CXL memX devices
 Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
 
 13237183c7 cxl/mem: Add a "RAW" send command
 Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
 
 472b1ce6e9 cxl/mem: Enable commands via CEL
 Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
 
 57ee605b97 cxl/mem: Add set of informational commands
 Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Merge tag 'cxl-for-5.12' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm

Pull initial support for CXL (Compute Express Link) from Dan Williams:
 "Introduce an initial driver for CXL 2.0 Type-3 Memory Devices.

  CXL is Compute Express Link which released the 2.0 specification in
  November. The Linux relevant changes in CXL 2.0 are support for an OS
  to dynamically assign address space to memory devices, support for
  switches, persistent memory, and hotplug.

  A Type-3 Memory Device is a PCI enumerated device presenting the CXL
  Memory Device Class Code and implementing the CXL.mem protocol.
  CXL.mem allows a device to advertise CPU and I/O coherent memory to the
  system, i.e. typical "System RAM" and "Persistent Memory" in Linux
  /proc/iomem terms.

  In addition to the CXL.mem fast path there is an administrative
  command hardware mailbox interface for maintenance and provisioning.
  It is this command interface that is the focus of the initial driver.
  With this driver a CXL device that is mapped by the BIOS can be
  administered by Linux.

  Linux support for CXL PMEM and dynamic CXL address space management
  are to be implemented post v5.12"

Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  4cdadfd5e0 ("cxl/mem: Introduce a driver for CXL-2.0-Type-3 endpoints")
  13237183c7 ("cxl/mem: Add a "RAW" send command")
  472b1ce6e9 ("cxl/mem: Enable commands via CEL")
  57ee605b97 ("cxl/mem: Add set of informational commands")

Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
  8adaf747c9 ("cxl/mem: Find device capabilities")
  b39cb1052a ("cxl/mem: Register CXL memX devices")

* tag 'cxl-for-5.12' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm:
  cxl/mem: Fix potential memory leak
  cxl/mem: Return -EFAULT if copy_to_user() fails
  MAINTAINERS: Add maintainers of the CXL driver
  cxl/mem: Add set of informational commands
  cxl/mem: Enable commands via CEL
  cxl/mem: Add a "RAW" send command
  cxl/mem: Add basic IOCTL interface
  cxl/mem: Register CXL memX devices
  cxl/mem: Find device capabilities
  cxl/mem: Introduce a driver for CXL-2.0-Type-3 endpoints
2021-02-24 09:38:36 -08:00
Ben Widawsky 583fa5e71c cxl/mem: Add basic IOCTL interface
Add a straightforward IOCTL that provides a mechanism for userspace to
query the supported memory device commands. CXL commands as they appear
to userspace are described as part of the UAPI kerneldoc. The command
list returned via this IOCTL will contain the full set of commands that
the driver supports; however, some of those commands may not be
available for use by userspace.

Memory device commands first appear in the CXL 2.0 specification. They
are submitted through a mailbox mechanism specified in the CXL 2.0
specification.

The send command allows userspace to issue mailbox commands directly to
the hardware. The list of available commands to send is the output of
the query command. The driver verifies basic properties of the command
and possibly inspects the input (or output) payload to determine whether
or not the command is allowed (or might taint the kernel).

Reported-by: kernel test robot <lkp@intel.com> # bug in earlier revision
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com> (v2)
Cc: Al Viro <viro@zeniv.linux.org.uk>
Link: https://lore.kernel.org/r/20210217040958.1354670-5-ben.widawsky@intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2021-02-16 20:36:38 -08:00
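
From userspace the query is a two-call pattern on the character device,
sketched below against the UAPI in include/uapi/linux/cxl_mem.h (error
handling elided):

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <sys/ioctl.h>
    #include <linux/cxl_mem.h>

    struct cxl_mem_query_commands *query;
    uint32_t n;
    int fd = open("/dev/cxl/mem0", O_RDWR);

    query = calloc(1, sizeof(*query));          /* n_commands == 0 */
    ioctl(fd, CXL_MEM_QUERY_COMMANDS, query);   /* kernel reports the count */
    n = query->n_commands;

    query = realloc(query, sizeof(*query) + n * sizeof(query->commands[0]));
    query->n_commands = n;
    ioctl(fd, CXL_MEM_QUERY_COMMANDS, query);   /* kernel fills commands[] */
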
Miguel Ojeda 1074f8ec28 clang-format: Update with the latest for_each macro list
Re-run the shell fragment that generated the original list.

Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
2021-01-29 15:00:23 +01:00
Linus Torvalds a1e16bc7d5 RDMA 5.10 pull request
The typical set of driver updates across the subsystem:
 
  - Driver minor changes and bug fixes for mlx5, efa, rxe, vmw_pvrdma, hns,
    usnic, qib, qedr, cxgb4, hns, bnxt_re
 
  - Various rtrs fixes and updates
 
  - Bug fix for mlx4 CM emulation for virtualization scenarios where MRA
    wasn't working right
 
  - Use tracepoints instead of pr_debug in the CM code
 
  - Scrub the locking in ucma and cma to close more syzkaller bugs
 
  - Use tasklet_setup in the subsystem
 
  - Revert the idea that 'destroy' operations are not allowed to fail at
    the driver level. This proved unworkable from a HW perspective.
 
  - Revise how the umem API works so drivers make fewer mistakes using it
 
  - XRC support for qedr
 
  - Convert uverbs objects RWQ and MW to the new allocation scheme
 
  - Large queue entry sizes for hns
 
  - Use hmm_range_fault() for mlx5 On Demand Paging
 
  - uverbs APIs to inspect the GID table instead of sysfs
 
  - Move some of the RDMA code for building large page SGLs into
    lib/scatterlist

Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma

Pull rdma updates from Jason Gunthorpe:
 "A usual cycle for RDMA with a typical mix of driver and core subsystem
  updates:

   - Driver minor changes and bug fixes for mlx5, efa, rxe, vmw_pvrdma,
     hns, usnic, qib, qedr, cxgb4, hns, bnxt_re

   - Various rtrs fixes and updates

   - Bug fix for mlx4 CM emulation for virtualization scenarios where
     MRA wasn't working right

   - Use tracepoints instead of pr_debug in the CM code

   - Scrub the locking in ucma and cma to close more syzkaller bugs

   - Use tasklet_setup in the subsystem

   - Revert the idea that 'destroy' operations are not allowed to fail
     at the driver level. This proved unworkable from a HW perspective.

   - Revise how the umem API works so drivers make fewer mistakes using
     it

   - XRC support for qedr

   - Convert uverbs objects RWQ and MW to the new allocation scheme

   - Large queue entry sizes for hns

   - Use hmm_range_fault() for mlx5 On Demand Paging

   - uverbs APIs to inspect the GID table instead of sysfs

   - Move some of the RDMA code for building large page SGLs into
     lib/scatterlist"

* tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma: (191 commits)
  RDMA/ucma: Fix use after free in destroy id flow
  RDMA/rxe: Handle skb_clone() failure in rxe_recv.c
  RDMA/rxe: Move the definitions for rxe_av.network_type to uAPI
  RDMA: Explicitly pass in the dma_device to ib_register_device
  lib/scatterlist: Do not limit max_segment to PAGE_ALIGNED values
  IB/mlx4: Convert rej_tmout radix-tree to XArray
  RDMA/rxe: Fix bug rejecting all multicast packets
  RDMA/rxe: Fix skb lifetime in rxe_rcv_mcast_pkt()
  RDMA/rxe: Remove duplicate entries in struct rxe_mr
  IB/hfi,rdmavt,qib,opa_vnic: Update MAINTAINERS
  IB/rdmavt: Fix sizeof mismatch
  MAINTAINERS: CISCO VIC LOW LATENCY NIC DRIVER
  RDMA/bnxt_re: Fix sizeof mismatch for allocation of pbl_tbl.
  RDMA/bnxt_re: Use rdma_umem_for_each_dma_block()
  RDMA/umem: Move to allocate SG table from pages
  lib/scatterlist: Add support in dynamic allocation of SG table from pages
  tools/testing/scatterlist: Show errors in human readable form
  tools/testing/scatterlist: Rejuvenate bit-rotten test
  RDMA/ipoib: Set rtnl_link_ops for ipoib interfaces
  RDMA/uverbs: Expose the new GID query API to user space
  ...
2020-10-17 11:18:18 -07:00
Mike Rapoport cc6de16805 memblock: use separate iterators for memory and reserved regions
for_each_memblock() is used to iterate over memblock.memory in a few
places that use data from memblock_region rather than the memory ranges.

Introduce separate for_each_mem_region() and
for_each_reserved_mem_region() to improve encapsulation of memblock
internals from its users.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Baoquan He <bhe@redhat.com>
Acked-by: Ingo Molnar <mingo@kernel.org>			[x86]
Acked-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>	[MIPS]
Acked-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>	[.clang-format]
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Daniel Axtens <dja@axtens.net>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Emil Renner Berthing <kernel@esmil.dk>
Cc: Hari Bathini <hbathini@linux.ibm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will@kernel.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Link: https://lkml.kernel.org/r/20200818151634.14343-18-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-10-13 18:38:35 -07:00
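
A sketch of the intended use, walking region descriptors rather than
address ranges (the body is illustrative):

    struct memblock_region *r;

    for_each_mem_region(r) {
            if (memblock_is_hotpluggable(r))
                    continue;
            /* r->base, r->size and r->flags describe the region */
    }
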
Mike Rapoport 9f3d5eaa3c memblock: implement for_each_reserved_mem_region() using __next_mem_region()
Iteration over memblock.reserved with for_each_reserved_mem_region() used
__next_reserved_mem_region() that implemented a subset of
__next_mem_region().

Use __for_each_mem_range() and, essentially, __next_mem_region() with
appropriate parameters to reduce code duplication.

While on it, rename for_each_reserved_mem_region() to
for_each_reserved_mem_range() for consistency.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>	[.clang-format]
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Baoquan He <bhe@redhat.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Daniel Axtens <dja@axtens.net>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Emil Renner Berthing <kernel@esmil.dk>
Cc: Hari Bathini <hbathini@linux.ibm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will@kernel.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Link: https://lkml.kernel.org/r/20200818151634.14343-17-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-10-13 18:38:35 -07:00
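
A typical caller after the rename, sketched; reserve_bootmem_region()
is the real mm helper used this way during boot:

    phys_addr_t start, end;
    u64 i;

    for_each_reserved_mem_range(i, &start, &end)
            reserve_bootmem_region(start, end);
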
Mike Rapoport 6e245ad4a1 memblock: reduce number of parameters in for_each_mem_range()
Currently for_each_mem_range() and for_each_mem_range_rev() iterators are
the most generic way to traverse memblock regions.  As such, they have 8
parameters and they are hardly convenient to users.  Most users choose to
utilize one of their wrappers and the only user that actually needs most
of the parameters is memblock itself.

To avoid yet another naming for memblock iterators, rename the existing
for_each_mem_range[_rev]() to __for_each_mem_range[_rev]() and add a new
for_each_mem_range[_rev]() wrappers with only index, start and end
parameters.

The new wrapper nicely fits into init_unavailable_mem() and will be used
in upcoming changes to simplify memblock traversals.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>	[MIPS]
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Baoquan He <bhe@redhat.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Daniel Axtens <dja@axtens.net>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Emil Renner Berthing <kernel@esmil.dk>
Cc: Hari Bathini <hbathini@linux.ibm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will@kernel.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Link: https://lkml.kernel.org/r/20200818151634.14343-11-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-10-13 18:38:35 -07:00
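
The new wrapper needs only an index and the two range outputs; a
sketch:

    phys_addr_t start, end;
    u64 i;

    /* NUMA node and flags arguments are gone from the common case. */
    for_each_mem_range(i, &start, &end)
            pr_info("memory range: %pa - %pa\n", &start, &end);
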
Jason Gunthorpe 5dee5872f8 Merge branch 'mlx5_active_speed' into rdma.git for-next
Leon Romanovsky says:

====================
IBTA declares speed as 16 bits, but kernel stores it in u8. This series
fixes in-kernel declaration while keeping external interface intact.
====================

Based on the mlx5-next branch at
     git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux
due to dependencies.

* branch 'mlx5_active_speed':
  RDMA: Fix link active_speed size
  RDMA/mlx5: Delete duplicated mlx5_ptys_width enum
  net/mlx5: Refactor query port speed functions
2020-09-18 10:31:45 -03:00
Jason Gunthorpe ebc24096c4 RDMA/umem: Add rdma_umem_for_each_dma_block()
This helper does the same as rdma_for_each_block(), except it works on a
umem. This simplifies most of the call sites.

Link: https://lore.kernel.org/r/4-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
Acked-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
Acked-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-09-09 15:33:17 -03:00
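
A sketch of a converted call site; fill_page_entry() is a hypothetical
consumer, and the block size must be one the device supports:

    struct ib_block_iter biter;

    rdma_umem_for_each_dma_block(umem, &biter, PAGE_SIZE)
            fill_page_entry(rdma_block_iter_dma_address(&biter));
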
Miguel Ojeda 4e4bb89446 clang-format: Update with the latest for_each macro list
Re-run the shell fragment that generated the original list.

Signed-off-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
2020-09-01 12:53:42 +02:00
Omar Sandoval 1072c12d7d block: add bio_for_each_bvec_all()
An upcoming Btrfs fix needs to know the original size of a non-cloned
bio. Rather than accessing the bvec table directly, let's add a
bio_for_each_bvec_all() accessor.

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25 11:25:24 +02:00
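
The accessor makes the Btrfs-style size computation a one-liner,
roughly:

    struct bio_vec *bvec;
    u64 orig_size = 0;
    int i;

    /* Only valid for non-cloned bios, which own their bvec table. */
    bio_for_each_bvec_all(bvec, bio, i)
            orig_size += bvec->bv_len;
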
Miguel Ojeda 5d65a0218f clang-format: Update with the latest for_each macro list
Re-run the shell fragment that generated the original list.

Signed-off-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
2020-04-18 13:49:33 +02:00
Ian Rogers c90f3b8c4b clang-format: don't indent namespaces
This change doesn't affect existing code. Inner namespace indentation
can lead to a lot of indentation in the case of anonymous namespaces and
the like, impeding readability. Of the clang-format builtin styles
LLVM, Google, Chromium and Mozilla use None while WebKit uses Inner.

Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
2020-04-18 13:45:18 +02:00
Miguel Ojeda 11a4a8f73b clang-format: Update with the latest for_each macro list
Re-run the shell fragment that generated the original list.

Signed-off-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
2020-03-06 21:50:05 +01:00
Miguel Ojeda 52d083472e clang-format: Update with the latest for_each macro list
Re-run the shell fragment that generated the original list.

Signed-off-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
2019-08-31 10:00:51 +02:00
David S. Miller 6b0a7f84ea Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
Conflict resolution of af_smc.c from Stephen Rothwell.

Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-17 11:26:25 -07:00
Miguel Ojeda f16628d6e8 clang-format: Update with the latest for_each macro list
Re-run the shell fragment that generated the original list now that
there are two dozen new entries after v5.1's merge window.

Signed-off-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
2019-04-12 12:49:54 +02:00
NeilBrown f7ad68bf98 rhashtable: rename rht_for_each*continue as *from.
The pattern set by list.h is that for_each..continue()
iterators start at the next entry after the given one,
while for_each..from() iterators start at the given
entry.

The rht_for_each*continue() iterators are documented as though they
start at the 'next' entry, but actually start at the given entry,
and they are used expecting that behaviour.
So fix the documentation and change the names to *from for consistency
with list.h.

Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-03-21 14:01:10 -07:00
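
After the rename the semantics match the name; a sketch of the *from
behaviour (pos is a struct rhash_head pointer):

    /* Starts AT 'head' itself, per the list.h *_from convention; the
     * old *continue name wrongly suggested starting after it. */
    rht_for_each_from(pos, head, tbl, hash) {
            /* first iteration visits 'head' */
    }
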
Linus Torvalds dbc2fba3fc Merge branch 'work.iov_iter' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
Pull iov_iter updates from Al Viro:
 "A couple of iov_iter patches - Christoph's crapectomy (the last
  remaining user of iov_for_each() went away with lustre, IIRC) and
  Eric's optimization of sanity checks"

* 'work.iov_iter' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
  iov_iter: optimize page_copy_sane()
  uio: remove the unused iov_for_each macro
2019-03-12 13:43:42 -07:00
Jason Gunthorpe ea1075edcb RDMA: Add and use rdma_for_each_port
We have many loops iterating over all of the end port numbers on a struct
ib_device; simplify them with a for_each helper.

Reviewed-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2019-02-19 10:13:39 -07:00
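
A sketch of a converted loop; setup_port() is a hypothetical per-port
hook:

    unsigned int port;

    /* Hides the rdma_start_port()/rdma_end_port() arithmetic. */
    rdma_for_each_port(ibdev, port)
            setup_port(ibdev, port);
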
Jason Gunthorpe d901b2760d lib/scatterlist: Provide a DMA page iterator
Commit 2db76d7c3c ("lib/scatterlist: sg_page_iter: support sg lists w/o
backing pages") introduced the sg_page_iter_dma_address() function without
providing a way to use it in the general case. If the sg_dma_len() is not
equal to the sg length callers cannot safely use the
for_each_sg_page/sg_page_iter_dma_address combination.

Resolve this API mistake by providing a DMA specific iterator,
for_each_sg_dma_page(), that uses the right length so
sg_page_iter_dma_address() works as expected with all sglists.

A new iterator type is introduced to provide compile-time safety against
wrongly mixing accessors and iterators.

Acked-by: Christoph Hellwig <hch@lst.de> (for scatterlist)
Acked-by: Thomas Hellstrom <thellstrom@vmware.com>
Acked-by: Sakari Ailus <sakari.ailus@linux.intel.com> (ipu3-cio2)
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2019-02-11 15:02:33 -07:00
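
A sketch of the DMA-safe iteration this adds; program_page() is a
hypothetical consumer, and note the dedicated iterator type:

    struct sg_dma_page_iter dma_iter;

    /* Walks sg_dma_len() so it stays correct when the IOMMU coalesces
     * entries; the distinct type prevents mixing in CPU-page accessors. */
    for_each_sg_dma_page(sgl, &dma_iter, nents, 0)
            program_page(sg_page_iter_dma_address(&dma_iter));
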
Christoph Hellwig 77000bc43d uio: remove the unused iov_for_each macro
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2019-02-04 10:59:50 -05:00
Jason Gunthorpe 99e309b6ed clang-format: Update .clang-format with the latest for_each macro list
Re-run the shell fragment that generated the original list. In particular
this adds the missing xarray related functions.

Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
2019-01-19 19:26:06 +01:00
Matthew Wilcox 3ece58a270 page cache: Convert find_get_pages_contig to XArray
There's no direct replacement for radix_tree_for_each_contig()
in the XArray API as it's an unusual thing to do.  Instead,
open-code a loop using xas_next().  This removes the only user of
radix_tree_for_each_contig() so delete the iterator from the API and
the test suite code for it.

Signed-off-by: Matthew Wilcox <willy@infradead.org>
2018-10-21 10:46:34 -04:00
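
The open-coded replacement follows the usual XArray walk, roughly as
below (the check for non-consecutive indices is elided):

    XA_STATE(xas, &mapping->i_pages, index);
    struct page *page;

    rcu_read_lock();
    for (page = xas_load(&xas); page; page = xas_next(&xas)) {
            if (xas_retry(&xas, page))
                    continue;
            /* stop at the first gap or non-consecutive index */
    }
    rcu_read_unlock();
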
Jason Gunthorpe 7bee9bd21b clang-format: Set IndentWrappedFunctionNames false
The true option causes this indenting for functions:

static struct something_very_very_long *
    function(void *arg)
{

A quick survey suggests that the usual Linux fallback is the GNU
style:

static struct something_very_very_long *
function(void *arg)
{

Eg as seen in:

kernel/cpu.c
kernel/fork.c
etc

Acked-by: Joe Perches <joe@perches.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
2018-08-01 18:38:51 +02:00
Miguel Ojeda d4ef8d3ff0 clang-format: add configuration file
clang-format is a tool to format C/C++/...  code according to a set of
rules and heuristics.  Like most tools, it is not perfect, nor does it cover
every single case, but it is good enough to be helpful.

In particular, it is useful for quickly re-formatting blocks of code
automatically, for reviewing full files in order to spot coding style
mistakes, typos and possible improvements.  It is also handy for sorting
``#includes``, for aligning variables and macros, for reflowing text and
other similar tasks.  It also serves as a teaching tool/guide for
newcomers.

The tool itself has been already included in the repositories of popular
Linux distributions for a long time.  The rules in this file are
intended for clang-format >= 4, which is easily available in most
distributions.

This commit adds the configuration file that contains the rules that the
tool uses to know how to format the code according to the kernel coding
style.  This gives us several advantages:

  * clang-format works out of the box with reasonable defaults;
    avoiding that everyone has to re-do the configuration.

  * Everyone agrees (eventually) on what is the most useful default
    configuration for most of the kernel.

  * If it becomes commonplace among kernel developers, clang-format
    may feel compelled to support us better. They already recognize
    the Linux kernel and its style in their documentation and in one
    of the style sub-options.

Some of clang-format's features relevant for the kernel are:

  * Uses clang's tooling support behind the scenes to parse and rewrite
    the code. It is not based on ad-hoc regexps.

  * Supports reasonably well the Linux kernel coding style.

  * Fast enough to be used at the press of a key.

  * There are already integrations (either built-in or third-party)
    for many common editors used by kernel developers (e.g. vim,
    emacs, Sublime, Atom...) that allow you to format an entire file
    or, more usefully, just your selection.

  * Able to parse unified diffs -- you can, for instance, reformat
    only the lines changed by a git commit.

  * Able to reflow text comments as well.

  * Widely supported and used by hundreds of developers in highly
    complex projects and organizations (e.g. the LLVM project itself,
    Chromium, WebKit, Google, Mozilla...). Therefore, it will be
    supported for a long time.

See more information about the tool at:

    https://clang.llvm.org/docs/ClangFormat.html
    https://clang.llvm.org/docs/ClangFormatStyleOptions.html

Link: http://lkml.kernel.org/r/20180318171632.qfkemw3mwbcukth6@gmail.com
Signed-off-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Andy Whitcroft <apw@canonical.com>
Cc: Joe Perches <joe@perches.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-04-11 10:28:35 -07:00