27 Mar, 2011

1 commit

  • The conversion to blk_delay_queue() missed parts of IDE.
    Add blk_delay_queue() calls to ensure that the request handler
    gets reinvoked when it needs to be.

    Note that in all but one place the old plug re-run delay of
    3 msecs is used, even though it probably could be shorter
    for performance reasons in some of those cases.
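    The re-run pattern this adds can be sketched as a minimal userspace
    model (the queue structure, state names, and request_fn body here are
    invented for illustration; only blk_delay_queue() and the 3 ms delay
    come from the patch itself):

    ```c
    #include <assert.h>

    /* Hypothetical model: when the request handler cannot make progress,
     * blk_delay_queue(q, 3) schedules it to run again later, replacing
     * the implicit plug re-run that the conversion removed. */
    enum { BLKDEV_ST_BUSY, BLKDEV_ST_READY };

    struct queue { int pending; int delayed_runs; };

    /* stand-in for blk_delay_queue(q, msecs) */
    static void blk_delay_queue(struct queue *q, int msecs)
    {
        (void)msecs;        /* the actual 3 ms timer is elided here */
        q->delayed_runs++;  /* record that a re-run was scheduled */
    }

    static void request_fn(struct queue *q, int dev_state)
    {
        if (dev_state == BLKDEV_ST_BUSY) {
            blk_delay_queue(q, 3);  /* device busy: retry in ~3 ms */
            return;
        }
        if (q->pending)
            q->pending--;           /* make progress on one request */
    }

    int main(void)
    {
        struct queue q = { .pending = 1, .delayed_runs = 0 };

        request_fn(&q, BLKDEV_ST_BUSY);   /* no progress, re-run scheduled */
        assert(q.pending == 1 && q.delayed_runs == 1);

        request_fn(&q, BLKDEV_ST_READY);  /* re-run completes the request */
        assert(q.pending == 0);
        return 0;
    }
    ```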

    Reported-by: Geert Uytterhoeven
    Signed-off-by: Jens Axboe

    Jens Axboe
     

10 Mar, 2011

1 commit

  • Code has been converted over to the new explicit on-stack plugging,
    and delay users have been converted to use the new API for that.
    So let's kill off the old plugging along with aops->sync_page().

    Signed-off-by: Jens Axboe

    Jens Axboe
     

10 Sep, 2010

1 commit

  • Unplugging from a request function doesn't really help much (we're
    already inside the request_fn), and the block layer will soon be
    updated to mix the barrier sequence with other commands, so there's
    no need to treat queue flushing any differently.

    ide was the only user of blk_queue_flushing(). Remove it.

    Signed-off-by: Tejun Heo
    Cc: Christoph Hellwig
    Acked-by: David S. Miller
    Signed-off-by: Jens Axboe

    Tejun Heo
     

01 Apr, 2010

1 commit

  • I noticed that my KVM virtual machines were experiencing IDE
    issues resulting in processes stuck waiting for buffers to
    complete.

    The root cause is of course race conditions in the ancient qemu
    backend that I'm using. However, the fact that the guest isn't
    recovering is a bug.

    I've tracked it down to the change made last year to dequeue
    requests at the start rather than at the end in the IDE layer.

    commit 8f6205cd572fece673da0255d74843680f67f879
    Author: Tejun Heo
    Date: Fri May 8 11:53:59 2009 +0900

    ide: dequeue in-flight request

    The problem is that the function ide_dma_timeout_retry does not
    requeue the current request, causing one request to be lost for
    each DMA timeout.

    This patch fixes this by requeueing the request.

    Signed-off-by: Herbert Xu
    Signed-off-by: David S. Miller

    Herbert Xu
     


13 Jun, 2009

1 commit

  • * 'for-2.6.31' of git://git.kernel.org/pub/scm/linux/kernel/git/bart/ide-2.6: (29 commits)
    ide: re-implement ide_pci_init_one() on top of ide_pci_init_two()
    ide: unexport ide_find_dma_mode()
    ide: fix PowerMac bootup oops
    ide: skip probe if there are no devices on the port (v2)
    sl82c105: add printk() logging facility
    ide-tape: fix proc warning
    ide: add IDE_DFLAG_NIEN_QUIRK device flag
    ide: respect quirk_drives[] list on all controllers
    hpt366: enable all quirks for devices on quirk_drives[] list
    hpt366: sync quirk_drives[] list with pdc202xx_{new,old}.c
    ide: remove superfluous SELECT_MASK() call from do_rw_taskfile()
    ide: remove superfluous SELECT_MASK() call from ide_driveid_update()
    icside: remove superfluous ->maskproc method
    ide-tape: fix IDE_AFLAG_* atomic accesses
    ide-tape: change IDE_AFLAG_IGNORE_DSC non-atomically
    pdc202xx_old: kill resetproc() method
    pdc202xx_old: don't call pdc202xx_reset() on IRQ timeout
    pdc202xx_old: use ide_dma_test_irq()
    ide: preserve Host Protected Area by default (v2)
    ide-gd: implement block device ->set_capacity method (v2)
    ...

    Linus Torvalds
     


22 May, 2009

2 commits

  • On Tuesday 19 May 2009 20:29:28 Martin Lottermoser wrote:

    > hdc: cdrom_decode_status: error=0x40 { LastFailedSense=0x04 }
    > ide: failed opcode was: unknown
    > hdc: DMA disabled
    > ------------[ cut here ]------------
    > kernel BUG at drivers/ide/ide-io.c:872!

    It is possible for ide-cd to ignore ide_error()'s return value under
    some circumstances. Work around it in ide_intr() and ide_timer_expiry()
    by checking whether a device/port reset is currently pending.

    Fixes bug #13345:

    http://bugzilla.kernel.org/show_bug.cgi?id=13345

    Reported-by: Martin Lottermoser
    Reported-and-tested-by: Modestas Vainius
    Cc: Borislav Petkov
    Signed-off-by: Bartlomiej Zolnierkiewicz

    Bartlomiej Zolnierkiewicz
     

11 May, 2009

5 commits

  • Until now the block layer has allowed two separate modes of request
    execution. A request is always acquired from the request queue via
    elv_next_request(). After that, drivers are free to either dequeue it
    or process it without dequeueing. Dequeueing allows elv_next_request()
    to return the next request so that multiple requests can be in flight.

    Executing requests without dequeueing has its merits mostly in
    allowing drivers for simpler devices which can't do scatter/gather
    to deal only with segments, without considering request boundaries.
    However, the benefit this brings is dubious and declining while the
    cost of the API ambiguity is increasing. Segment-based drivers are
    usually for very old or limited devices, and as converting to the
    dequeueing model isn't difficult, it doesn't justify the API overhead
    it puts on the block layer and its more modern users.

    Previous patches converted all block low level drivers to dequeueing
    model. This patch completes the API transition by...

    * renaming elv_next_request() to blk_peek_request()

    * renaming blkdev_dequeue_request() to blk_start_request()

    * adding blk_fetch_request() which is combination of peek and start

    * disallowing completion of queued (not started) requests

    * applying new API to all LLDs

    Renamings are for consistency and to break out of tree code so that
    it's apparent that out of tree drivers need updating.

    [ Impact: block request issue API cleanup, no functional change ]
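    The renamed issue path can be sketched with a tiny userspace model
    (the struct layouts and the extra queue parameter to
    blk_start_request() are simplifications, not the kernel signatures):

    ```c
    #include <assert.h>
    #include <stddef.h>

    /* Model of the new API: blk_peek_request() returns the head request
     * without dequeueing, blk_start_request() dequeues it (marking it
     * in flight), and blk_fetch_request() combines the two. */
    struct request { struct request *next; int started; };
    struct request_queue { struct request *head; };

    static struct request *blk_peek_request(struct request_queue *q)
    {
        return q->head;                  /* look, but don't dequeue */
    }

    static void blk_start_request(struct request_queue *q, struct request *rq)
    {
        q->head = rq->next;              /* dequeue: request is in flight */
        rq->started = 1;                 /* only started requests may complete */
    }

    static struct request *blk_fetch_request(struct request_queue *q)
    {
        struct request *rq = blk_peek_request(q);
        if (rq)
            blk_start_request(q, rq);
        return rq;
    }

    int main(void)
    {
        struct request r2 = { NULL, 0 }, r1 = { &r2, 0 };
        struct request_queue q = { &r1 };

        assert(blk_peek_request(&q) == &r1);  /* peek does not dequeue */
        assert(blk_peek_request(&q) == &r1);

        assert(blk_fetch_request(&q) == &r1); /* fetch = peek + start */
        assert(r1.started == 1);
        assert(blk_fetch_request(&q) == &r2);
        assert(blk_fetch_request(&q) == NULL);
        return 0;
    }
    ```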

    Signed-off-by: Tejun Heo
    Cc: Rusty Russell
    Cc: James Bottomley
    Cc: Mike Miller
    Cc: unsik Kim
    Cc: Paul Clements
    Cc: Tim Waugh
    Cc: Geert Uytterhoeven
    Cc: David S. Miller
    Cc: Laurent Vivier
    Cc: Jeff Garzik
    Cc: Jeremy Fitzhardinge
    Cc: Grant Likely
    Cc: Adrian McMenamin
    Cc: Stephen Rothwell
    Cc: Bartlomiej Zolnierkiewicz
    Cc: Borislav Petkov
    Cc: Sergei Shtylyov
    Cc: Alex Dubov
    Cc: Pierre Ossman
    Cc: David Woodhouse
    Cc: Markus Lidel
    Cc: Stefan Weinhuber
    Cc: Martin Schwidefsky
    Cc: Pete Zaitcev
    Cc: FUJITA Tomonori
    Signed-off-by: Jens Axboe

    Tejun Heo
     
  • ide generally has a single request in flight, tracks it using
    hwif->rq, and all state handlers follow the following convention.

    * ide_started is returned if the request is in flight.

    * ide_stopped is returned if the queue needs to be restarted. The
    request might or might not have been processed fully or partially.

    * hwif->rq is set to NULL, when an issued request completes.

    So, the dequeueing model can be implemented by dequeueing after
    fetch, requeueing if hwif->rq isn't NULL on ide_stopped return, and
    doing about the same thing on the completion / port unlock paths.
    These changes can be made in ide-io proper.

    In addition to the above main changes, the following updates are
    necessary.

    * ide-cd shouldn't dequeue a request when issuing REQUEST SENSE for it
    as the request is already dequeued.

    * ide-atapi uses request queue as stack when issuing REQUEST SENSE to
    put the REQUEST SENSE in front of the failed request. This now
    needs to be done using requeueing.

    [ Impact: dequeue in-flight request ]
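    The requeue rule can be sketched as a heavily simplified userspace
    model (the struct names and helper functions are invented; only the
    hwif->rq / ide_stopped convention comes from the commit):

    ```c
    #include <assert.h>
    #include <stddef.h>

    /* Model: fetch dequeues the request into hwif->rq; if a handler
     * returns ide_stopped while hwif->rq is still set, the request was
     * not finished and must go back to the head of the queue. */
    typedef enum { ide_stopped, ide_started } ide_startstop_t;

    struct request { struct request *next; };
    struct hwif { struct request *rq; };
    struct queue { struct request *head; };

    static struct request *fetch(struct queue *q, struct hwif *hwif)
    {
        struct request *rq = q->head;
        if (rq) {
            q->head = rq->next;
            hwif->rq = rq;               /* dequeued: now in flight */
        }
        return rq;
    }

    static void requeue_if_unfinished(struct queue *q, struct hwif *hwif,
                                      ide_startstop_t ret)
    {
        if (ret == ide_stopped && hwif->rq) {  /* not completed: put back */
            hwif->rq->next = q->head;
            q->head = hwif->rq;
            hwif->rq = NULL;
        }
    }

    int main(void)
    {
        struct request r = { NULL };
        struct queue q = { &r };
        struct hwif hwif = { NULL };

        assert(fetch(&q, &hwif) == &r && q.head == NULL);
        requeue_if_unfinished(&q, &hwif, ide_stopped); /* stopped early */
        assert(q.head == &r && hwif.rq == NULL);       /* back at head */
        return 0;
    }
    ```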

    Signed-off-by: Tejun Heo
    Cc: Bartlomiej Zolnierkiewicz
    Cc: Borislav Petkov
    Cc: Sergei Shtylyov
    Signed-off-by: Jens Axboe

    Tejun Heo
     
  • With recent unification of fields, it's now guaranteed that
    rq->data_len always equals blk_rq_bytes(). Convert all direct users
    to accessors.

    [ Impact: convert direct rq->data_len usages to blk_rq_bytes() ]
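    The guarantee the accessor relies on can be shown with a tiny model
    (the struct here is a simplification of struct request):

    ```c
    #include <assert.h>

    /* Model of the accessor that direct rq->data_len users are
     * converted to; with the fields unified, the two always agree. */
    struct request { unsigned int data_len; };

    static inline unsigned int blk_rq_bytes(const struct request *rq)
    {
        return rq->data_len;
    }

    int main(void)
    {
        struct request rq = { .data_len = 4096 };
        assert(blk_rq_bytes(&rq) == rq->data_len);
        return 0;
    }
    ```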

    Signed-off-by: Tejun Heo
    Acked-by: Bartlomiej Zolnierkiewicz
    Cc: Borislav Petkov
    Cc: Sergei Shtylyov
    Signed-off-by: Jens Axboe

    Tejun Heo
     
  • ide doesn't manipulate request fields anymore, and thus all hard
    fields and their soft equivalents are always equal. Convert all
    references to accessors.

    [ Impact: use pos and nr_sectors accessors ]

    Signed-off-by: Tejun Heo
    Acked-by: Bartlomiej Zolnierkiewicz
    Cc: Borislav Petkov
    Cc: Sergei Shtylyov
    Signed-off-by: Jens Axboe

    Tejun Heo
     
  • Implement accessors - blk_rq_pos(), blk_rq_sectors() and
    blk_rq_cur_sectors() which return rq->hard_sector, rq->hard_nr_sectors
    and rq->hard_cur_sectors respectively and convert direct references of
    the said fields to the accessors.

    This is in preparation of request data length handling cleanup.

    Geert : suggested adding const to struct request * parameter to accessors
    Sergei : spotted error in patch description

    [ Impact: cleanup ]
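    The three accessors can be modeled in userspace as follows (the
    struct request layout is simplified; the const parameter follows
    Geert's suggestion mentioned above):

    ```c
    #include <assert.h>

    typedef unsigned long long sector_t;

    /* Simplified model: each accessor just returns the corresponding
     * hard_* field of the request. */
    struct request {
        sector_t hard_sector;
        unsigned long hard_nr_sectors;
        unsigned int hard_cur_sectors;
    };

    static inline sector_t blk_rq_pos(const struct request *rq)
    {
        return rq->hard_sector;
    }

    static inline unsigned long blk_rq_sectors(const struct request *rq)
    {
        return rq->hard_nr_sectors;
    }

    static inline unsigned int blk_rq_cur_sectors(const struct request *rq)
    {
        return rq->hard_cur_sectors;
    }

    int main(void)
    {
        struct request rq = { .hard_sector = 2048,
                              .hard_nr_sectors = 16,
                              .hard_cur_sectors = 8 };
        assert(blk_rq_pos(&rq) == 2048);
        assert(blk_rq_sectors(&rq) == 16);
        assert(blk_rq_cur_sectors(&rq) == 8);
        return 0;
    }
    ```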

    Signed-off-by: Tejun Heo
    Acked-by: Geert Uytterhoeven
    Acked-by: Stephen Rothwell
    Tested-by: Grant Likely
    Acked-by: Grant Likely
    Ackec-by: Sergei Shtylyov
    Cc: Bartlomiej Zolnierkiewicz
    Cc: Borislav Petkov
    Cc: James Bottomley
    Signed-off-by: Jens Axboe

    Tejun Heo
     

28 Apr, 2009

4 commits

  • Impact: remove code path which is no longer necessary

    All IDE data transfers now use rq->bio. Simplify ide_map_sg()
    accordingly.

    Signed-off-by: Tejun Heo
    Cc: Jens Axboe

    Tejun Heo
     
  • Impact: cleanup rq->data usage

    ide-pm uses rq->data to carry a pointer to struct request_pm_state
    through the request queue, and rq->special is used to carry a pointer
    to a local struct ide_cmd, which isn't necessary. Use rq->special for
    request_pm_state instead and use a local ide_cmd in
    ide_start_power_step().

    Signed-off-by: Tejun Heo
    Cc: Jens Axboe

    Tejun Heo
     
  • Impact: unify request data buffer handling

    rq->data is used mostly to pass a kernel buffer through the request
    queue without using bio. There are only a couple of places which
    still do this in the kernel, and converting them to bio isn't
    difficult.

    This patch converts ide-cd and atapi to use bio instead of rq->data
    for request sense and internal pc commands. With previous change to
    unify sense request handling, this is relatively easily achieved by
    adding blk_rq_map_kern() during sense_rq prep and PC issue.

    If blk_rq_map_kern() fails for sense, the error is deferred till sense
    issue and aborts the failed command which triggered the sense. Note
    that this is a slim possibility as sense prep is done on each command
    issue, so for the above condition to actually trigger, all preps since
    the last sense issue till the issue of the request which would require
    a sense should fail.

    * do_request functions might sleep now. This should be okay as ide
    request_fn - do_ide_request() - is invoked only from make_request
    and plug work. Make sure this is the case by adding might_sleep()
    to do_ide_request().

    * Functions which access the read sense data before the sense request
    is complete now should access bio_data(sense_rq->bio) as the sense
    buffer might have been copied during blk_rq_map_kern().

    * ide-tape updated to map sg.

    * cdrom_do_block_pc() now doesn't have to deal with REQ_TYPE_ATA_PC
    special case. Simplified.

    * tp_ops->output/input_data path dropped from ide_pc_intr().
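    Why readers must go through bio_data(sense_rq->bio) can be shown with
    a small userspace model (the helpers here are invented stand-ins, not
    the kernel API; the point is only that mapping may copy the buffer):

    ```c
    #include <assert.h>
    #include <string.h>

    /* Model: mapping a kernel buffer into a request may bounce-copy it
     * into a bio-backed buffer, so the bio, not the original pointer,
     * holds the authoritative data afterwards. */
    struct bio { char data[64]; };
    struct request { struct bio *bio; };

    /* stand-in for blk_rq_map_kern(): this model always bounce-copies */
    static int map_kern(struct request *rq, struct bio *b,
                        const char *buf, size_t len)
    {
        memcpy(b->data, buf, len);
        rq->bio = b;
        return 0;
    }

    static char *bio_data(struct bio *b)
    {
        return b->data;
    }

    int main(void)
    {
        char sense[16] = "sense-bytes";
        struct bio b;
        struct request rq;

        assert(map_kern(&rq, &b, sense, sizeof(sense)) == 0);
        sense[0] = 'X';  /* original buffer is no longer authoritative */
        assert(bio_data(rq.bio)[0] == 's');  /* read through the bio */
        return 0;
    }
    ```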

    Signed-off-by: Tejun Heo

    Tejun Heo
     
  • Impact: rq->buffer usage cleanup

    ide_raw_taskfile() directly uses rq->buffer to carry a pointer to
    the data buffer. This complicates both the block interface and ide
    backend request handling. Use blk_rq_map_kern() instead and drop the
    special handling for REQ_TYPE_ATA_TASKFILE from ide_map_sg().

    Note that REQ_RW setting is moved upwards as blk_rq_map_kern() uses it
    to initialize bio rw flag.

    Signed-off-by: Tejun Heo
    Cc: Jens Axboe

    Tejun Heo
     



08 Apr, 2009

3 commits

  • Simplify tf_read() method, making it deal only with 'struct ide_taskfile' and
    the validity flags that the upper layer passes, and factoring out the code that
    deals with the high order bytes into ide_tf_readback() to be called from the
    only two functions interested, ide_complete_cmd() and ide_dump_sector().

    This should stop the needless code duplication in this method,
    roughly halving its size; along with simplifying the setup for the
    method call, this should save both time and space...

    Signed-off-by: Sergei Shtylyov
    Signed-off-by: Bartlomiej Zolnierkiewicz

    Sergei Shtylyov
     
  • Make 'struct ide_taskfile' cover only 8 register values and thus put two such
    fields ('tf' and 'hob') into 'struct ide_cmd', dropping unnecessary 'tf_array'
    field from it.

    This required changing the prototype of ide_get_lba_addr() and ide_tf_dump().
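    The reshaped types can be sketched as follows (field names are
    illustrative and not verified against ide.h; only the 8-register
    taskfile and the tf/hob pair come from the commit):

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* Model: 'struct ide_taskfile' covers only the 8 register values,
     * and 'struct ide_cmd' carries two of them, 'tf' and 'hob',
     * replacing the old 'tf_array'. */
    struct ide_taskfile {
        uint8_t data, error, nsect, lbal, lbam, lbah, device, command;
    };

    struct ide_cmd {
        struct ide_taskfile tf;   /* low-order register bytes */
        struct ide_taskfile hob;  /* high-order bytes for LBA48 */
    };

    int main(void)
    {
        assert(sizeof(struct ide_taskfile) == 8);  /* 8 registers */
        assert(sizeof(struct ide_cmd) == 2 * sizeof(struct ide_taskfile));
        return 0;
    }
    ```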

    Signed-off-by: Sergei Shtylyov
    [bart: fix setting of ATA_LBA bit for LBA48 commands in __ide_do_rw_disk()]
    Signed-off-by: Bartlomiej Zolnierkiewicz

    Sergei Shtylyov
     
  • Replace the IDE_TFLAG_{IN|OUT}_* flags, which indicated taskfile
    register validity on input/output, with IDE_VALID_* flags, and
    introduce four symmetric 8-bit register validity subfields,
    'valid.{input|output}.{tf|hob}', into 'struct ide_cmd' instead of
    using the 'tf_flags' field for that purpose (that field can then be
    shrunk from 32 bits to 8).
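    The validity scheme can be sketched like this (the bit assignments
    and the IDE_VALID_OUT_TF composite are illustrative assumptions, not
    verified against ide.h):

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* Model of the IDE_VALID_* masks and the 8-bit
     * valid.{input|output}.{tf|hob} subfields that replace tf_flags. */
    #define IDE_VALID_ERROR   (1 << 1)  /* hypothetical bit values */
    #define IDE_VALID_NSECT   (1 << 2)
    #define IDE_VALID_LBAL    (1 << 3)
    #define IDE_VALID_LBAM    (1 << 4)
    #define IDE_VALID_LBAH    (1 << 5)
    #define IDE_VALID_DEVICE  (1 << 6)
    #define IDE_VALID_OUT_TF  (IDE_VALID_NSECT | IDE_VALID_LBAL | \
                               IDE_VALID_LBAM | IDE_VALID_LBAH)

    struct ide_cmd_valid {
        struct { uint8_t tf, hob; } input;   /* registers to read back */
        struct { uint8_t tf, hob; } output;  /* registers to write */
    };

    int main(void)
    {
        struct ide_cmd_valid valid = { .output = { .tf = IDE_VALID_OUT_TF } };

        /* a driver would touch only the registers whose bit is set */
        assert(valid.output.tf & IDE_VALID_LBAL);
        assert(!(valid.output.tf & IDE_VALID_DEVICE));
        assert(valid.input.hob == 0);  /* no HOB readback requested */
        return 0;
    }
    ```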

    Signed-off-by: Sergei Shtylyov
    Signed-off-by: Bartlomiej Zolnierkiewicz

    Sergei Shtylyov