10 Dec, 2008

1 commit

  • There is a DMA map/unmap imbalance whenever a block write request
    packet is sent and then dequeued with ohci_cancel_packet. The latter
    may happen frequently if the AR resp tasklet is executed before the AT
    req tasklet for the same transaction.

    Add the missing dma_unmap_single. This fixes
    https://bugzilla.redhat.com/show_bug.cgi?id=475156

    Reported-by: Emmanuel Kowalski
    Tested-by: Emmanuel Kowalski
    Signed-off-by: Stefan Richter

    Stefan Richter
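
    A minimal sketch of the pairing the fix restores. The struct and function
    names below are illustrative, not the actual fw-ohci code; only the
    dma_map_single()/dma_unmap_single() calls are the point:

      #include <linux/dma-mapping.h>
      #include <linux/types.h>
      #include <linux/errno.h>

      /* Illustrative packet state: the payload of a block write request is
       * mapped once when the packet is queued and must be unmapped on every
       * exit path, including cancellation via ohci_cancel_packet(). */
      struct example_packet {
              void *payload;
              size_t payload_length;
              dma_addr_t payload_bus;
              bool payload_mapped;
      };

      static int example_queue_write(struct device *dev, struct example_packet *p)
      {
              p->payload_bus = dma_map_single(dev, p->payload,
                                              p->payload_length, DMA_TO_DEVICE);
              if (dma_mapping_error(dev, p->payload_bus))
                      return -ENOMEM;
              p->payload_mapped = true;
              /* ...append the descriptor to the AT request context... */
              return 0;
      }

      static void example_cancel_write(struct device *dev, struct example_packet *p)
      {
              /* This is the unmap that was missing in the dequeue path. */
              if (p->payload_mapped) {
                      dma_unmap_single(dev, p->payload_bus,
                                       p->payload_length, DMA_TO_DEVICE);
                      p->payload_mapped = false;
              }
      }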
     

26 Nov, 2008

1 commit


31 Oct, 2008

1 commit


26 Oct, 2008

6 commits

  • 1: There is a small race between queue_delayed_work() and its
    corresponding kref_get(). Do the kref_get first, and _put it again
    if the queue_delayed_work() failed, so there is no chance of the
    kref going to zero while the work is scheduled.
    2: An SBP2_LOGOUT_REQUEST could be sent out with a login_id full of
    garbage. Initialize it to an invalid value so we can tell if we
    ever got a valid login_id.
    3: The node ID and generation may have changed but the new values may
    not yet have been recorded in lu and tgt when the final logout is
    attempted. Use the latest values from the device in
    sbp2_release_target().

    Signed-off-by: Jay Fenlason
    Signed-off-by: Stefan Richter

    Jay Fenlason
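
    A sketch of the reordering described in point 1. The struct layout, the
    sbp2_wq workqueue, and the target_release() function are simplified
    stand-ins for the driver's real names:

      #include <linux/kref.h>
      #include <linux/workqueue.h>

      struct example_target { struct kref kref; };
      struct example_lu {
              struct example_target *tgt;
              struct delayed_work work;
      };

      extern struct workqueue_struct *sbp2_wq;
      void target_release(struct kref *kref);

      /* Take the reference before queueing.  If the work was already
       * pending and queue_delayed_work() therefore queues nothing, drop
       * the reference again.  The kref can no longer reach zero while the
       * work is scheduled. */
      static void example_queue_lu_work(struct example_lu *lu, unsigned long delay)
      {
              kref_get(&lu->tgt->kref);
              if (!queue_delayed_work(sbp2_wq, &lu->work, delay))
                      kref_put(&lu->tgt->kref, target_release);
      }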
     
  • This optimizes firewire-sbp2's device probe for the case that the local
    node and the SBP-2 node were discovered at the same time. In this case,
    fw-core's bus management work and fw-sbp2's login and SCSI probe work
    are scheduled in parallel (in the globally shared workqueue and in
    fw-sbp2's workqueue, respectively). The bus reset from fw-core may then
    disturb and severely delay the login and SCSI probe, because the latter
    fails with several command timeouts and retries and has to be restarted
    from scratch.

    We avoid this particular situation of sbp2_login() and fw_card_bm_work()
    running in parallel by delaying the first sbp2_login() a little bit.

    This is meant to be a short-term fix for
    https://bugzilla.redhat.com/show_bug.cgi?id=466679. In the long run,
    the SCSI probe, i.e. fw-sbp2's call of __scsi_add_device(), should be
    parallelized with sbp2_reconnect().

    Problem reported and fix tested and confirmed by Alex Kanavin.

    Signed-off-by: Stefan Richter

    Stefan Richter
     
  • Fix leaks when pci_probe fails. Simplify error log strings.

    Signed-off-by: Stefan Richter

    Stefan Richter
     
  • The transmit and receive context dma memory was not being freed on
    module removal. Neither was the config rom memory. Fix that.

    The ab->next assignment is pure paranoia.

    Signed-off-by: Jay Fenlason
    Signed-off-by: Stefan Richter

    Jay Fenlason
     
  • With the bus_resets patch applied, it is easy to see this memory leak
    by repeatedly resetting the firewire bus while running slabtop in
    another window. Just watch kmalloc-32 grow and grow...

    Signed-off-by: Jay Fenlason
    Signed-off-by: Stefan Richter

    Jay Fenlason
     
  • The "color" is used during the topology building after a bus reset,
    hovever in "struct fw_node"s it is stored in a u8, but in struct fw_card
    it is stored in an int. When the value wraps in one struct, but not
    the other, disaster strikes.

    Signed-off-by: Jay Fenlason

    Fixes http://bugzilla.kernel.org/show_bug.cgi?id=10922.

    Signed-off-by: Stefan Richter

    Jay Fenlason
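
    The mismatch, reduced to the two fields in question. Which side the actual
    patch widens or narrows is not stated here; the point is only that both
    copies of the color must have the same range:

      #include <linux/types.h>

      struct example_node {
              u8 color;               /* wraps at 255 */
      };

      struct example_card {
              int color;              /* wraps at INT_MAX */
      };

      /* The topology build marks visited nodes with the card's current
       * color and later tests  node->color == card->color.  Once the card
       * has seen more than 255 bus resets, the assignment
       *
       *         node->color = card->color;
       *
       * truncates, the comparison fails even for a freshly marked node,
       * and the topology code misbehaves.  Storing the color with the same
       * width on both sides removes the disagreement. */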
     

16 Oct, 2008

5 commits

  • Reported by Jay Fenlason: ioctl() did not return as intended
    - the size of data read into ioctl_send_request,
    - the number of datagrams enqueued by ioctl_queue_iso.

    Signed-off-by: Stefan Richter

    Stefan Richter
     
  • Reported by Jay Fenlason:
    The iso packet control accessors in fw-cdev.c had bogus masks.

    Signed-off-by: Stefan Richter

    Stefan Richter
     
  • queuecommand() looked at the remote and local node IDs before it read
    the bus generation. The corresponding race with sbp2_reconnect updating
    these data probably could not actually happen, because the current
    code blocks the SCSI layer during reconnection. However, better safe
    than sorry, especially if someone later improves the code to not block
    the SCSI layer.

    Signed-off-by: Stefan Richter

    Stefan Richter
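
    The safe ordering is the usual firewire generation/node-ID pattern: the
    updater writes the node ID first, then (after a write barrier) the
    generation; the reader does the reverse. Field names below are simplified:

      #include <linux/kernel.h>
      #include <asm/system.h>         /* smp_rmb() in kernels of this era */

      struct example_device {
              int generation;
              int node_id;
      };

      static void example_read_ids(struct example_device *dev,
                                   int *generation, int *node_id)
      {
              /* Read the generation before the node ID.  If we observe the
               * new generation we are guaranteed to observe the new node ID
               * as well (it was written first on the update side); if we
               * observe the old generation, the request is rejected as
               * stale, which is harmless. */
              *generation = dev->generation;
              smp_rmb();
              *node_id = dev->node_id;
      }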
     
  • 1. We don't need to round the SBP-2 segment size limit down to a
    multiple of 4 kB (0xffff -> 0xf000). It is only necessary to
    ensure quadlet alignment (0xffff -> 0xfffc).

    2. Use dma_set_max_seg_size() to tell the DMA mapping infrastructure
    and the block IO layer about the restriction. This way we can
    remove the size checks and segment splitting in the queuecommand
    path.

    This assumes that no other code in the firewire stack uses
    dma_map_sg() with conflicting requirements. It furthermore assumes
    that the controller device's platform actually allows us to set the
    segment size to our liking. Assert the latter with a BUG_ON().

    3. Also use blk_queue_max_segment_size() to tell the block IO layer
    about it. It cannot know it because our scsi_add_host() does not
    point to the FireWire controller's device.

    Thanks to Grant Grundler and FUJITA Tomonori for advice.

    Signed-off-by: Stefan Richter

    Stefan Richter
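
    A sketch of points 1 to 3; SBP2_MAX_SEG_SIZE and the two helper functions
    are assumed names:

      #include <linux/dma-mapping.h>
      #include <linux/blkdev.h>
      #include <scsi/scsi_device.h>

      /* 0xffff rounded down to quadlet alignment, not to a 4 kB multiple. */
      #define SBP2_MAX_SEG_SIZE  0xfffc

      /* Point 2: tell the DMA mapping layer, and assert that the platform
       * actually lets us set a segment size for the controller device. */
      static void example_limit_dma_segments(struct device *card_device)
      {
              BUG_ON(dma_set_max_seg_size(card_device, SBP2_MAX_SEG_SIZE));
      }

      /* Point 3: tell the block layer too; the SCSI host is not parented to
       * the FireWire controller's device, so it cannot pick this up itself. */
      static void example_limit_blk_segments(struct scsi_device *sdev)
      {
              blk_queue_max_segment_size(sdev->request_queue, SBP2_MAX_SEG_SIZE);
      }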
     
  • Share code between fw_send_request + wait_for_completion callers.

    Signed-off-by: Jay Fenlason

    Addendum:
    Removes an unnecessary struct and an unused retry loop.
    Calls it fw_run_transaction() instead of fw_send_request_sync().

    Signed-off-by: Stefan Richter
    Acked-by: Kristian Høgsberg

    Jay Fenlason
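
    The shared helper boils down to the usual completion pattern: submit the
    request with a callback, sleep on a completion, return the result code.
    The sketch uses a hypothetical send_request_async() rather than the real
    fw_send_request() prototype:

      #include <linux/completion.h>
      #include <linux/types.h>

      struct fw_card;

      /* Hypothetical asynchronous submit, standing in for fw_send_request(). */
      void send_request_async(struct fw_card *card, u64 offset,
                              void *payload, size_t length,
                              void (*callback)(int rcode, void *data),
                              void *data);

      struct example_sync_data {
              struct completion done;
              int rcode;
      };

      /* Called from tasklet context when the response arrives or the
       * request fails. */
      static void example_sync_callback(int rcode, void *data)
      {
              struct example_sync_data *d = data;

              d->rcode = rcode;
              complete(&d->done);
      }

      /* What fw_run_transaction() amounts to for its callers. */
      static int example_run_transaction(struct fw_card *card, u64 offset,
                                         void *payload, size_t length)
      {
              struct example_sync_data d;

              init_completion(&d.done);
              send_request_async(card, offset, payload, length,
                                 example_sync_callback, &d);
              wait_for_completion(&d.done);
              return d.rcode;
      }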
     

20 Aug, 2008

1 commit


07 Aug, 2008

1 commit


03 Aug, 2008

1 commit

  • Recently, a bug having to do with the alignment of transaction response
    data was fixed. However, some apps such as libdc1394 relied on the
    presence of that bug in order to function correctly. In order to stay
    compatible with old versions of those apps, this patch preserves the bug
    in cases where it is harmless to normal operation (such as the single
    quadlet read) due to a simple duplication of data. This guarantees
    maximum compatibility for those users who are using the old app with the
    fixed kernel.

    Signed-off-by: David Moore
    Signed-off-by: Stefan Richter

    David Moore
     

28 Jul, 2008

1 commit

  • * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ieee1394/linux1394-2.6:
    firewire: state userland requirements in Kconfig help
    firewire: avoid memleak after phy config transmit failure
    firewire: fw-ohci: TSB43AB22/A dualbuffer workaround
    firewire: queue the right number of data
    firewire: warn on unfinished transactions during card removal
    firewire: small fw_fill_request cleanup
    firewire: fully initialize fw_transaction before marking it pending
    firewire: fix race of bus reset with request transmission

    Linus Torvalds
     

27 Jul, 2008

1 commit

  • Add per-device dma_mapping_ops support for CONFIG_X86_64 as POWER
    architecture does:

    This enables us to cleanly fix the Calgary IOMMU issue that some devices
    are not behind the IOMMU (http://lkml.org/lkml/2008/5/8/423).

    I think that per-device dma_mapping_ops support would be also helpful for
    KVM people to support PCI passthrough but Andi thinks that this makes it
    difficult to support the PCI passthrough (see the above thread). So I
    CC'ed this to KVM camp. Comments are appreciated.

    A pointer to dma_mapping_ops is added to struct dev_archdata. If the
    pointer is non-NULL, the DMA operations in asm/dma-mapping.h use it. If it is
    NULL, the system-wide dma_ops pointer is used as before.

    If it's useful for KVM people, I plan to implement a mechanism to register
    a hook called when a new pci (or dma capable) device is created (it works
    with hot plugging). It enables IOMMUs to set up an appropriate
    dma_mapping_ops per device.

    The major obstacle is that dma_mapping_error doesn't take a pointer to the
    device unlike other DMA operations. So x86 can't have dma_mapping_ops per
    device. Note all the POWER IOMMUs use the same dma_mapping_error function
    so this is not a problem for POWER but x86 IOMMUs use different
    dma_mapping_error functions.

    The first patch adds the device argument to dma_mapping_error. The patch
    is trivial but large since it touches lots of drivers and dma-mapping.h in
    all the architectures.

    This patch:

    dma_mapping_error() doesn't take a pointer to the device unlike other DMA
    operations. So we can't have dma_mapping_ops per device.

    Note that POWER already has dma_mapping_ops per device, but all the POWER
    IOMMUs use the same dma_mapping_error function, whereas x86 IOMMUs use
    different dma_mapping_error functions.

    [akpm@linux-foundation.org: fix sge]
    [akpm@linux-foundation.org: fix svc_rdma]
    [akpm@linux-foundation.org: build fix]
    [akpm@linux-foundation.org: fix bnx2x]
    [akpm@linux-foundation.org: fix s2io]
    [akpm@linux-foundation.org: fix pasemi_mac]
    [akpm@linux-foundation.org: fix sdhci]
    [akpm@linux-foundation.org: build fix]
    [akpm@linux-foundation.org: fix sparc]
    [akpm@linux-foundation.org: fix ibmvscsi]
    Signed-off-by: FUJITA Tomonori
    Cc: Muli Ben-Yehuda
    Cc: Andi Kleen
    Cc: Thomas Gleixner
    Cc: Ingo Molnar
    Cc: Avi Kivity
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    FUJITA Tomonori
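
    The heart of the change is a small dispatch helper in the x86 DMA code.
    The sketch below uses the era's struct dma_mapping_ops naming; field and
    helper names are paraphrased, not copied from the patch:

      #include <linux/device.h>
      #include <linux/dma-mapping.h>

      extern struct dma_mapping_ops *dma_ops;       /* system-wide default */
      extern dma_addr_t bad_dma_address;

      /* struct dev_archdata gains a per-device ops pointer:
       *
       *         struct dev_archdata {
       *                 struct dma_mapping_ops *dma_ops;
       *         };
       *
       * NULL means "use the global dma_ops as before".  The inline DMA
       * operations in asm/dma-mapping.h then dispatch through a helper
       * like this one. */
      static inline struct dma_mapping_ops *example_get_dma_ops(struct device *dev)
      {
              if (dev && dev->archdata.dma_ops)
                      return dev->archdata.dma_ops;
              return dma_ops;
      }

      /* Why dma_mapping_error() now needs the device: without it, there is
       * no way to reach the per-device ops and their mapping_error hook. */
      static inline int example_dma_mapping_error(struct device *dev,
                                                  dma_addr_t addr)
      {
              struct dma_mapping_ops *ops = example_get_dma_ops(dev);

              if (ops->mapping_error)
                      return ops->mapping_error(dev, addr);
              return addr == bad_dma_address;
      }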
     

26 Jul, 2008

2 commits


25 Jul, 2008

1 commit

  • Isochronous reception in dualbuffer mode is reportedly broken with
    TI TSB43AB22A on x86-64. Descriptor addresses above 2G have been
    determined as the trigger:
    https://bugzilla.redhat.com/show_bug.cgi?id=435550

    Two fixes are possible:
    - pci_set_consistent_dma_mask(pdev, DMA_31BIT_MASK);
    at least when IR descriptors are allocated, or
    - simply don't use dualbuffer.
    This fix implements the latter workaround.

    But we keep using dualbuffer on x86-32, which won't give us highmem (and
    thus physical addresses outside the 31-bit range) in coherent DMA memory
    allocations. Right now we could for example also whitelist PPC32, but
    DMA mapping implementation details are expected to change there.

    Signed-off-by: Stefan Richter
    Signed-off-by: Jarod Wilson

    Stefan Richter
     

20 Jul, 2008

1 commit

  • On some platforms there are 4 padding bytes at the end of struct
    fw_cdev_event_response, and the member __u32 data points at these padding
    bytes. When complete_transaction in fw-cdev.c queues the response and its
    data, the result looks like this:
    |response (excluding padding bytes)|4 padding bytes|data.
    It queues 4 extra bytes; that is to say, it uses "&response + sizeof(response)"
    while other places in the kernel and the userspace library use "&response +
    offsetof(typeof(response), data)". So the last 4 bytes of data are lost.
    This patch fixes it without changing the struct definition.

    Signed-off-by: JiSheng Zhang

    This fixes responses to outbound block read requests on 64bit architectures.
    Tested on i686, x86-64, and x86-64 with i686 userland, using firecontrol and
    gscanbus.

    Signed-off-by: Stefan Richter

    JiSheng Zhang
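
    The arithmetic behind this, using the event structure's layout on a
    64-bit build:

      #include <linux/types.h>
      #include <linux/stddef.h>

      struct fw_cdev_event_response {
              __u64 closure;
              __u32 type;
              __u32 rcode;
              __u32 length;
              __u32 data[0];
      };

      /* On 64-bit builds, offsetof(..., data) is 20 but sizeof() is 24,
       * because the struct is padded to the 8-byte alignment of 'closure'.
       * Copying sizeof() bytes of header therefore inserts 4 stray bytes
       * before the payload and cuts off its last 4 bytes on the way to
       * userspace.
       *
       * Buggy:  queue a header of  sizeof(e->response)                  bytes
       * Fixed:  queue a header of  offsetof(typeof(e->response), data)  bytes
       */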
     

16 Jul, 2008

1 commit


14 Jul, 2008

11 commits

  • After card->done and card->work are completed, any remaining pending
    request would be a bug. We cannot safely complete a transaction at
    that point anymore.

    IOW card users must not drop their last fw_card reference (usually
    indirect references through fw_device references) before their last
    outbound transaction through that card was finished.

    Signed-off-by: Stefan Richter

    Stefan Richter
     
  • - better name for a function argument
    - removal of a local variable which became unnecessary after
    "fully initialize fw_transaction before marking it pending"

    Signed-off-by: Stefan Richter

    Stefan Richter
     
  • In theory, card->flush_timer could already access a transaction between
    fw_send_request()'s spin_unlock_irqrestore and the rest of what happens
    in fw_send_request(). This would happen if the process which sends the
    request is preempted and put to sleep right after spin_unlock_irqrestore
    for longer than 100ms.

    Therefore we fill in everything in struct fw_transaction that the
    flush_timer might look at before we lift the lock.

    To do: Ensure that the timer does not pick up the transaction before
    the time of the AT request event plus split transaction timeout.

    Signed-off-by: Stefan Richter

    Stefan Richter
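
    A sketch of the ordering, with the field list cut down to a few
    representative members (the real struct fw_transaction has more):

      #include <linux/spinlock.h>
      #include <linux/list.h>
      #include <linux/jiffies.h>

      struct example_transaction {
              int node_id;
              int tlabel;
              unsigned long timestamp;
              struct list_head link;
      };

      struct example_card {
              spinlock_t lock;
              struct list_head transaction_list;
      };

      /* Everything the flush_timer dereferences is filled in while the
       * transaction is still invisible, i.e. before it is added to
       * card->transaction_list and before the lock is dropped. */
      static void example_add_transaction(struct example_card *card,
                                          struct example_transaction *t,
                                          int tlabel, int node_id)
      {
              unsigned long flags;

              spin_lock_irqsave(&card->lock, flags);

              t->node_id = node_id;
              t->tlabel = tlabel;
              t->timestamp = jiffies;     /* what the flush_timer compares */
              list_add_tail(&t->link, &card->transaction_list);

              spin_unlock_irqrestore(&card->lock, flags);
      }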
     
  • Reported by Jay Fenlason: A bus reset tasklet may call
    fw_flush_transactions and touch transactions (call their callback which
    will free them) while the context which submitted the transaction is
    still inserting it into the transmission queue.

    A simple solution to this problem is to _not_ "flush" the transactions
    because of a bus reset, i.e. not to complete them as 'cancelled' right
    away. They will now simply time out and be completed as 'cancelled' by
    the split-timeout timer.

    Jay Fenlason thought of this fix too but I was quicker to type it out.
    :-)

    Background:
    Contexts which access an instance of struct fw_transaction are:
    1. the submitter, until it inserted the packet which is embedded in the
    transaction into the AT req DMA,
    2. the AsReqTrContext tasklet when the request packet was acked by the
    responder node or transmission to the responder failed,
    3. the AsRspRcvContext tasklet when it found a request which matched
    an incoming response,
    4. the card->flush_timer when it picks up timed-out transactions to
    cancel them,
    5. the bus reset tasklet when it cancels transactions (this access is
    eliminated by this patch),
    6. a process which shuts down an fw_card (unregisters it from fw-core
    when the controller is unbound from fw-ohci) --- although in this
    case there shouldn't really be any transactions anymore because we
    wait until all card users finished their business with the card.

    All of these contexts run concurrently (except for the 6th, presumably).
    The 1st is safe against the 2nd and 3rd because of the way how a request
    packet is carefully submitted to the hardware. A race between 2nd and
    3rd has been fixed a while ago (bug 9617). The 4th is almost safe
    against 1st, 2nd, 3rd; there are issues with it if huge scheduling
    latencies occur, to be fixed separately. The 5th looks safe against
    2nd, 3rd, and 4th but is unsafe against 1st. Maybe this could be fixed
    with an explicit state variable in struct fw_transaction. But this
    would require fw_transaction to be rewritten as only dynamically
    allocatable object with reference counting --- not a good solution if we
    also can simply kill this 5th accessing context (replace it by the 4th).

    Signed-off-by: Stefan Richter

    Stefan Richter
     
  • Contrary to a comment in the source, request->ack of a broadcast write
    request can be ACK_PENDING. Hence the existing check is insufficient.

    Debug dmesg before:
    AR spd 0 tl 00, ffc0 -> ffff, ack_pending , QW req, fffff0000234 = ffffffff
    AT spd 0 tl 00, ffff -> ffc0, ack_complete, W resp
    And the requesting node (linux1394) reports an unsolicited response.

    Debug dmesg after:
    AR spd 0 tl 00, ffc0 -> ffff, ack_pending , QW req, fffff0000234 = ffffffff

    Signed-off-by: Stefan Richter

    Stefan Richter
     
  • This is a functionally equivalent replacement of the current reference
    counting of struct fw_card instances. It only converts it to common
    idioms as suggested by Kristian Høgsberg:
    - struct kref replaces atomic_t as the counter.
    - wait_for_completion is used to wait for all card users to complete.

    BTW, it may make sense to count card->flush_timer and card->work as
    card users too.

    Signed-off-by: Stefan Richter

    Stefan Richter
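
    The two idioms in reduced form; the helper names are illustrative:

      #include <linux/kref.h>
      #include <linux/completion.h>
      #include <linux/kernel.h>

      struct example_card_ref {
              struct kref kref;          /* one reference per card user */
              struct completion done;    /* signalled when the last user is gone */
      };

      static void example_card_init(struct example_card_ref *c)
      {
              kref_init(&c->kref);       /* the card's own reference */
              init_completion(&c->done);
      }

      static void example_card_release(struct kref *kref)
      {
              struct example_card_ref *c =
                      container_of(kref, struct example_card_ref, kref);

              complete(&c->done);
      }

      static void example_card_put(struct example_card_ref *c)
      {
              kref_put(&c->kref, example_card_release);
      }

      /* Shutdown: drop the card's own reference, then sleep until every
       * remaining user has dropped theirs. */
      static void example_wait_for_card_users(struct example_card_ref *c)
      {
              example_card_put(c);
              wait_for_completion(&c->done);
      }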
     
  • Signed-off-by: Stefan Richter

    Stefan Richter
     
  • Signed-off-by: Stefan Richter

    Stefan Richter
     
  • See IEEE 1394a clause 8.3.2.3.11.

    Signed-off-by: Stefan Richter

    Stefan Richter
     
  • This instructs sd_mod to send START STOP UNIT on suspend and resume,
    and on driver unbinding or unloading (including when the system is shut
    down).

    We don't do this though if multiple initiators may log in to the target.

    Signed-off-by: Stefan Richter
    Tested-by: Tino Keitel

    Stefan Richter
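
    In sd terms this is a per-device opt-in at slave_configure time; the
    exclusive-login condition is paraphrased here:

      #include <linux/types.h>
      #include <scsi/scsi_device.h>

      /* Let sd_mod issue START STOP UNIT on suspend/resume and on unbind,
       * but only when we are the sole initiator logged in to the target. */
      static int example_slave_configure(struct scsi_device *sdev,
                                         bool exclusive_login)
      {
              if (exclusive_login)
                      sdev->manage_start_stop = 1;
              return 0;
      }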
     
  • Reported by Tino Keitel: PL-3507 with firmware from Prolific does not
    spin down the disk on START STOP UNIT with power condition = 0 and start
    = 0. It does however work with power condition = 2 or 3.

    Also found while investigating this: DViCO Momobay CX-1 and FX-3A (TI
    TSB42AA9/A based) become unresponsive after START STOP UNIT with power
    condition = 0 and start = 0. They stay responsive if power condition is
    set when stopping the motor.

    Signed-off-by: Stefan Richter
    Tested-by: Tino Keitel

    Stefan Richter
     

28 Jun, 2008

1 commit


19 Jun, 2008

4 commits

  • Emphasize the recommendation to build only one stack.
    Trim the prompts to better fit into short attention spans.

    Signed-off-by: Stefan Richter

    Stefan Richter
     
  • If the low-level driver failed to initialize a card properly without
    noticing it, fw-core was blocked indefinitely when trying to send a
    PHY config packet. This hung up the events kernel thread, e.g. locked
    up keyboard input.
    https://bugzilla.redhat.com/show_bug.cgi?id=444694
    https://bugzilla.redhat.com/show_bug.cgi?id=446763

    This problem was introduced between 2.6.25 and 2.6.26-rc1 by commit
    2a0a2590498be7b92e3e76409c9b8ee722e23c8f "firewire: wait until PHY
    configuration packet was transmitted (fix bus reset loop)".

    The solution is to wait with a timeout. I tested it with 7 different
    working controllers and 1 non-working controller. On the working ones,
    the packet callback complete()s usually --- but not always --- before a
    timeout of 10ms. Hence I chose a safer timeout of 100ms.

    On the few tests with the non-working controller ALi M5271, PHY config
    packet transmission always timed out so far. (Fw-ohci needs to be fixed
    for this controller independently of this deadline fix. Often the core
    doesn't even attempt to send a phy config because not even self ID
    reception works.)

    Signed-off-by: Stefan Richter

    Stefan Richter
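
    The shape of the fix, assuming a completion that the packet callback
    complete()s:

      #include <linux/types.h>
      #include <linux/completion.h>
      #include <linux/jiffies.h>

      /* Returns true if the PHY config packet transmission completed within
       * the deadline, false if the controller never called back.  The old
       * code used wait_for_completion() and could block the shared event
       * work forever on a broken controller. */
      static bool example_wait_for_phy_config(struct completion *done)
      {
              return wait_for_completion_timeout(done,
                                                 msecs_to_jiffies(100)) != 0;
      }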
     
  • The messages which can be enabled by fw-ohci's debug module parameter
    are changed from KERN_DEBUG to KERN_NOTICE level and uniformly prefixed
    with "firewire_ohci: ". This further simplifies communication with
    users when we ask them to capture debug messages.

    Signed-off-by: Stefan Richter

    Stefan Richter
     
  • Callers of fill_bus_reset_event() have to take card->lock. Otherwise
    access to node data may oops if node removal is in progress.

    A lockless alternative would be

    - event->local_node_id = card->local_node->node_id;
    + tmp = fw_node_get(card->local_node);
    + event->local_node_id = tmp->node_id;
    + fw_node_put(tmp);

    and ditto with the other node pointers which fill_bus_reset_event()
    accesses. But I went the locked route because one of the two callers
    already holds the lock. As a bonus, we don't need the memory barrier
    anymore because device->generation and device->node_id are written in
    a card->lock protected section.

    Signed-off-by: Stefan Richter
    Signed-off-by: Kristian Høgsberg

    Stefan Richter
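
    The locked variant that was chosen, with the caller shape simplified and
    fill_bus_reset_event()'s argument list assumed from the description above:

      #include <linux/spinlock.h>

      struct client;
      struct fw_cdev_event_bus_reset;

      /* Assumed prototype of the existing helper. */
      void fill_bus_reset_event(struct fw_cdev_event_bus_reset *event,
                                struct client *client);

      /* With card->lock held, the node pointers that fill_bus_reset_event()
       * dereferences cannot go away underneath it, and device->node_id and
       * device->generation are updated inside the same critical section, so
       * no extra memory barrier is needed on this path. */
      static void example_queue_bus_reset_event(spinlock_t *card_lock,
                                                struct client *client,
                                                struct fw_cdev_event_bus_reset *event)
      {
              unsigned long flags;

              spin_lock_irqsave(card_lock, flags);
              fill_bus_reset_event(event, client);
              spin_unlock_irqrestore(card_lock, flags);
      }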