LKML Archive
From: Laurent Pinchart <>
To: Dave Airlie <>
Cc: Greg Kroah-Hartman <>,
	Jason Gunthorpe <>,
	Daniel Vetter <>,
	Linus Torvalds <>,
	Oded Gabbay <>,
	LKML <>
Subject: Re: [git pull] habanalabs pull request for kernel 5.15
Date: Mon, 23 Aug 2021 02:06:58 +0300	[thread overview]
Message-ID: <> (raw)
In-Reply-To: <>

On Fri, Aug 20, 2021 at 04:48:18AM +1000, Dave Airlie wrote:
> On Fri, 20 Aug 2021 at 03:07, Greg KH wrote:
> > On Thu, Aug 19, 2021 at 02:02:09PM +0300, Oded Gabbay wrote:
> > > Hi Greg,
> > >
> > > This is habanalabs pull request for the merge window of kernel 5.15.
> > > The commits divide roughly 50/50 between adding new features, such
> > > as peer-to-peer support with DMA-BUF or signaling from within a graph,
> > > and fixing various bugs, small improvements, etc.
> >
> > Pulled and pushed out, thanks!
>
> NAK for adding dma-buf or p2p support to this driver in the upstream
> kernel. There needs to be a hard line between
> "I-can't-believe-its-not-a-drm-driver" drivers which bypass our
> userspace requirements, and I consider this the line.
> This driver was merged into misc on the grounds it wasn't really a
> drm/gpu driver and so didn't have to accept our userspace rules.
> Adding dma-buf/p2p support to this driver is showing it really fits
> the gpu driver model and should be under the drivers/gpu rules since
> what are most GPUs except accelerators.
> We are opening a major can of worms (some would say merging habanalabs
> driver opened it), but this places us in the situation that if a GPU
> vendor just claims their hw is a "vector" accelerator they can use
> Greg to bypass all the work that has been done to ensure we have
> maintainability long term. I don't want drivers in the tree using
> dma-buf to interact with other drivers when we don't have access to a
> userspace project to validate the kernel driver assumptions.

I can only voice the strongest agreement here. This is a situation that
is all too familiar and that we're facing in the camera world as well.
For the past ten years, the camera community has worked hard to build
bridges with hardware vendors. The public development in the kernel tree
is only the visible part of the iceberg; a lot of effort has gone into
reaching out, teaching and helping. A few years ago the libcamera
project was started to offer hardware vendors a userspace framework
where they can contribute code, similar to Mesa for graphics (and
related) acceleration.

I can't emphasize strongly enough how much effort it took to start
getting vendors on board, and the situation is still fragile at best. If
we now send the message that all of this can be bypassed by merging code
into drivers/misc/ that ignores all the rules, it would mean ten years
of completely wasted work. Besides the technical impact, the effect on
the motivation of the kernel and userspace communities we have slowly
built over time would be catastrophic.

-- 
Laurent Pinchart

Thread overview: 7+ messages
2021-08-19 11:02 Oded Gabbay
2021-08-19 17:04 ` Greg KH
2021-08-19 18:48   ` Dave Airlie
2021-08-20  6:43     ` Daniel Vetter
2021-08-20 10:02     ` Greg Kroah-Hartman
2021-08-22 23:06     ` Laurent Pinchart [this message]
2021-08-25  1:16     ` Jeffrey Hugo
