14 Oct, 2007

3 commits

  • We were simply concatenating the devhandle and devino and using that
    as the cookie, which defeats the entire purpose of the VIRQ hypervisor
    interfaces.

    Now that we use physical addresses for the INO buckets, we can
    allocate them dynamically for VIRQs and encode the cookies as
    ~__pa(bucket). This lets us test for and decode a cookie with a
    simple two-instruction sequence:

        brlz    %reg1, 1f
         xnor   %reg1, %g0, %reg2

    This works because bit 63 is never set in traditional
    INO vectors, and it is also never set in a physical
    address. So xnor'ing the physical address of the bucket
    with %g0 (i.e. complementing it) always gives us a
    negative number, and thus a unique condition we can
    test cheaply (see the C sketch below).

    Inspired by ideas from Greg Onufer.

    Signed-off-by: David S. Miller

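    As an illustrative sketch of the encoding described above (not the
    kernel's actual code; __pa()/__va() stand for the kernel's
    physical/virtual address converters):

        struct ino_bucket;

        /* Encode a dynamically allocated bucket as a VIRQ cookie.
         * Bit 63 is clear in any physical address, so ~__pa(bucket)
         * is always negative when viewed as a signed 64-bit value.
         */
        static unsigned long bucket_to_cookie(struct ino_bucket *bucket)
        {
                return ~__pa(bucket);
        }

        /* Decode: a negative value must be a cookie; complementing it
         * (what the xnor with %g0 does) recovers the bucket's physical
         * address.  Non-negative values are traditional INO vectors.
         */
        static struct ino_bucket *cookie_to_bucket(unsigned long cookie)
        {
                if ((long)cookie < 0)           /* the brlz test */
                        return (struct ino_bucket *) __va(~cookie);
                return NULL;
        }
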
  • Signed-off-by: David S. Miller

  • Currently we chain IVEC entries using 32-bit "pointers"
    because we know that the ivector_table is in the main
    kernel image, thus below 4GB.

    This switches to proper 64-bit pointers instead.

    Whilst this bloats the kernel image size, it lays the
    infrastructure necessary to significantly shrink the
    kernel later by using physical addresses and dynamically
    allocating the ivector table (see the sketch below).

    Signed-off-by: David S. Miller

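    A minimal sketch of the change, with illustrative field names (the
    real structure carries more members):

        /* Before: chain via a 32-bit "pointer", valid only while
         * ivector_table sits in the main kernel image, below 4GB.
         */
        struct ino_bucket_old {
                unsigned int irq_chain;         /* low 32 bits of next entry */
        };

        /* After: a proper 64-bit pointer, so the table can later be
         * dynamically allocated and referenced by physical address.
         */
        struct ino_bucket_new {
                struct ino_bucket_new *irq_chain;
        };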

20 Jun, 2006

2 commits

    This is the long-overdue conversion of sparc64 to the
    generic IRQ layer.

    The kernel image is slightly larger, but the BSS is ~60K
    smaller due to the reduced size of struct ino_bucket.

    A lot of IRQ implementation details, including ino_bucket,
    were moved out of asm-sparc64/irq.h and are now private to
    arch/sparc64/kernel/irq.c, and most of the old code in
    irq.c disappeared entirely.

    One thing that's different at the moment is IRQ distribution:
    we do it at enable_irq() time. If the cpu mask is ALL, we
    round-robin using a global rotating cpu counter; otherwise we
    pick the first cpu in the mask, to support single-cpu
    targeting (sketched below). This is similar to what powerpc's
    XICS IRQ support code does.

    This works fine on my UP SB1000, and the SMP build goes
    fine and runs on that machine, but lots of testing on
    different setups is needed.

    Signed-off-by: David S. Miller

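    A minimal sketch of the distribution policy described above, using
    the current cpumask API for illustration (the 2006 code used the
    older cpus_* helpers, and details differed):

        /* Pick a target cpu at enable_irq() time: round-robin over
         * online cpus when the mask allows any cpu, otherwise honor
         * the first cpu named in the mask.
         */
        static int irq_choose_cpu(const struct cpumask *affinity)
        {
                static int rotate = -1;         /* global rotating counter */

                if (cpumask_equal(affinity, cpu_online_mask)) {
                        /* "ALL" mask: round-robin across online cpus. */
                        rotate = cpumask_next(rotate, cpu_online_mask);
                        if (rotate >= nr_cpu_ids)
                                rotate = cpumask_first(cpu_online_mask);
                        return rotate;
                }
                /* Specific mask: single-cpu targeting. */
                return cpumask_first_and(affinity, cpu_online_mask);
        }
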
  • This is the first in a series of cleanups that will hopefully
    allow a seamless attempt at using the generic IRQ handling
    infrastructure in the Linux kernel.

    Define PIL_DEVICE_IRQ and vector all device interrupts through
    there.

    Get rid of the ugly pil0_dummy_{bucket,desc}; instead, vector
    the timer interrupt directly to a specific handler, since the
    timer interrupt is the only event that will be signaled on
    PIL 14.

    The irq_worklist is now in the per-cpu trap_block[] (see the
    sketch below).

    Signed-off-by: David S. Miller

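    A rough sketch of the arrangement described above; the PIL values
    and field names are illustrative, the real definitions live in the
    arch/sparc64 headers:

        /* All device interrupts funnel through one PIL; the timer
         * keeps its own, so PIL 14 needs no dummy bucket handling.
         */
        #define PIL_DEVICE_IRQ          5
        #define PIL_TICK                14

        /* Per-cpu trap state now carries the pending-IRQ worklist. */
        struct trap_per_cpu {
                /* ... other per-cpu trap state ... */
                unsigned int irq_worklist;
        };
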

20 Mar, 2006

3 commits