04 Jan, 2014

4 commits

  • Zefan Li requested [1] to perform the following cleanup/refactoring:

    - Split cgroupfs classid handling into net core to better express a
    possible more generic use.

    - Disable module support for the cgroupfs bits, as the majority of
    other cgroupfs subsystems do not have it, and it does not seem to be
    wanted on the cgroup side. Zefan may want to follow up for netprio
    later on.

    - This also allows removing the code that previously took care of
    functionality built when compiled as a module.

    The cgroupfs bits are placed in net/core/netclassid_cgroup.c, so that
    we are consistent with the {netclassid,netprio}_cgroup naming under
    net/core/, as suggested by Zefan.

    No change in functionality; only code refactoring is done here.

    [1] http://patchwork.ozlabs.org/patch/304825/

    Suggested-by: Li Zefan
    Signed-off-by: Daniel Borkmann
    Cc: Zefan Li
    Cc: Thomas Graf
    Cc: cgroups@vger.kernel.org
    Acked-by: Li Zefan
    Signed-off-by: Pablo Neira Ayuso

    Daniel Borkmann
     
  • The following code is not used in the current upstream code.
    Some of it seems to be old hooks, other parts might be used by some
    out-of-tree module (which I don't care about breaking), and
    need_ipv4_conntrack was used by the old NAT code but is no longer
    called.

    Signed-off-by: Stephen Hemminger
    Signed-off-by: Pablo Neira Ayuso

    stephen hemminger
     
  • Function never used in current upstream code.

    Signed-off-by: Stephen Hemminger
    Signed-off-by: Pablo Neira Ayuso

    stephen hemminger
     
  • We currently use prandom_u32() for allocation of ports in tcp bind(0)
    and udp code. In case of plain SNAT we try to keep the ports as is
    or increment on collision.

    SNAT --random mode uses per-destination incrementing port
    allocation. A recent paper [1] pointed out that this mode of port
    allocation makes it possible for an off-path attacker to find the
    randomly allocated ports through a timing side-channel in a socket
    overloading attack.

    So, NF_NAT_RANGE_PROTO_RANDOM actually weakens the port randomization
    with regard to the attack described in this paper. As we need to keep
    compatibility, add another flag called NF_NAT_RANGE_PROTO_RANDOM_FULLY
    that replaces the NF_NAT_RANGE_PROTO_RANDOM hash-based port
    selection algorithm with a simple prandom_u32() in order to mitigate
    this attack vector. Note that lfsr113's internal state is
    periodically reseeded by the kernel through a local secure entropy
    source.
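
    As a rough sketch of the difference (the helpers and names below are
    invented for illustration, not the actual nf_nat code):

    /* NF_NAT_RANGE_PROTO_RANDOM_FULLY: every allocation is an
     * independent PRNG draw, so one observed port reveals nothing
     * about the next. */
    static u16 pick_port_fully_random(u16 min, u16 range)
    {
            return min + (u16)(prandom_u32() % range);
    }

    /* NF_NAT_RANGE_PROTO_RANDOM: hash-based start plus per-destination
     * increment -- once one port leaks, later ones are predictable. */
    static u16 pick_port_hash_incr(u16 min, u16 range, u32 dst_hash,
                                   u16 *offset)
    {
            return min + (u16)((dst_hash + (*offset)++) % range);
    }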

    More details can be found in [1]; the basic idea is to send bursts
    of packets to a socket to overflow its receive queue and measure
    the latency to detect a possible retransmit when the port is found.
    Because ports are incremented per destination, further
    allocations can be predicted. This information could then be used by
    an attacker, e.g., for cache-poisoning, NS pinning, and degradation
    of service attacks against DNS servers [1]:

    The best defense against the poisoning attacks is to properly
    deploy and validate DNSSEC; DNSSEC provides security not only
    against off-path attacker but even against MitM attacker. We hope
    that our results will help motivate administrators to adopt DNSSEC.
    However, full DNSSEC deployment may take significant time, and
    until that happens, we recommend short-term, non-cryptographic
    defenses. We recommend to support full port randomisation,
    according to practices recommended in [2], and to avoid
    per-destination sequential port allocation, which we show may be
    vulnerable to derandomisation attacks.

    Joint work between Hannes Frederic Sowa and Daniel Borkmann.

    [1] https://sites.google.com/site/hayashulman/files/NIC-derandomisation.pdf
    [2] http://arxiv.org/pdf/1205.5190v1.pdf

    Signed-off-by: Hannes Frederic Sowa
    Signed-off-by: Daniel Borkmann
    Signed-off-by: Pablo Neira Ayuso

    Daniel Borkmann
     

24 Dec, 2013

1 commit

  • With this match, the user can select IPComp packets tagged with a
    certain CPI in which the host is not interested, and DROP them or
    apply any other action.

    For example:
    iptables -A INPUT -p 108 -m ipcomp --ipcompspi 0x87 -j DROP
    ip6tables -A INPUT -p 108 -m ipcomp --ipcompspi 0x87 -j DROP

    Incoming IPComp packets with a CPI equal to 0x87 will then no longer
    reach the upper layers.

    Signed-off-by: Fan Du
    Signed-off-by: Pablo Neira Ayuso

    fan.du
     

21 Dec, 2013

1 commit

  • Thanks to commits 41063e9 (ipv4: Early TCP socket demux) and 421b388
    (udp: ipv4: Add udp early demux) it is now possible to parse UID and
    GID socket info also for incoming TCP and UDP connections. Having
    this info available, it is convenient to let NFQUEUE parse it in
    order to improve and refine the traffic analysis in userspace.

    Signed-off-by: Valentina Giusti
    Signed-off-by: Pablo Neira Ayuso

    Valentina Giusti
     

13 Dec, 2013

3 commits

  • Reorder struct netns_ct so that changes to the atomic_t "count" don't
    slow down users of the read-mostly fields.

    This is based on Eric Dumazet's proposed patch:
    "netfilter: conntrack: remove the central spinlock"
    http://thread.gmane.org/gmane.linux.network/268758/focus=47306

    The tricky part of cache-aligning this structure is that it gets
    inlined into struct net (include/net/net_namespace.h); thus, changes
    to other netns_xxx structures affect our alignment.

    Eric's original patch contained an ambiguity on 32-bit regarding
    alignment in struct net. This patch also takes 32-bit into account,
    and in case of changed (struct net) alignment the sysctl_xxx entries
    have been ordered according to how often they are accessed.
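
    The general pattern, as a minimal sketch (field names invented for
    illustration, not the actual netns_ct layout):

    struct example_ns {
            unsigned int            htable_size;  /* read-mostly fields */
            struct hlist_nulls_head *hash;        /* kept close together */
            /* the frequently written counter gets its own cache line, so
             * writers do not invalidate the readers' lines above */
            atomic_t                count ____cacheline_aligned_in_smp;
    };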

    Signed-off-by: Jesper Dangaard Brouer
    Reviewed-by: Jiri Benc
    Signed-off-by: Pablo Neira Ayuso

    Jesper Dangaard Brouer
     
  • This patch modifies the GRO stack to avoid the use of "network_header"
    and associated macros like ip_hdr() and ipv6_hdr() in order to allow
    an arbitrary number of IP hdrs (v4 or v6) to be used in the
    encapsulation chain. This lays the foundation for various IP
    tunneling support (IP-in-IP, GRE, VXLAN, SIT,...) to be added later.

    With this patch, GRO stack traversal is now mostly based on
    skb_gro_offset rather than special hdr offsets saved in the skb (e.g.,
    skb->network_header). As a result, all but the top layer (i.e., the
    transport layer) must have hdrs of the same length in order for
    a pkt to be considered for aggregation. Therefore when adding a new
    encap layer (e.g., for tunneling), one must check and skip flows
    (e.g., by setting NAPI_GRO_CB(p)->same_flow to 0) that have a
    different hdr length.

    Note that unlike the network header, the transport header can and
    will continue to be set by the GRO code since there will be at
    most one "transport layer" in the encap chain.
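
    A hedged sketch of that skip rule, roughly as a tunnel's gro_receive
    callback might apply it (the header-length accessor is hypothetical):

    static void example_flush_mismatched(struct sk_buff **head,
                                         unsigned int my_hlen)
    {
            struct sk_buff *p;

            for (p = *head; p; p = p->next) {
                    if (!NAPI_GRO_CB(p)->same_flow)
                            continue;
                    /* flows built with a different encap hdr length
                     * cannot be aggregated with this packet */
                    if (example_encap_hlen(p) != my_hlen)
                            NAPI_GRO_CB(p)->same_flow = 0;
            }
    }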

    Signed-off-by: H.K. Jerry Chu
    Suggested-by: Eric Dumazet
    Reviewed-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Jerry Chu
     
  • Now that macvlan and macvtap use the same receive and
    forward handlers, we can remove them completely and use
    netif_rx() and dev_forward_skb() directly.

    Signed-off-by: Vlad Yasevich
    Signed-off-by: David S. Miller

    Vlad Yasevich
     

12 Dec, 2013

2 commits

  • RFC 4191 states in 3.5:

    When a host avoids using any non-reachable router X and instead sends
    a data packet to another router Y, and the host would have used
    router X if router X were reachable, then the host SHOULD probe each
    such router X's reachability by sending a single Neighbor
    Solicitation to that router's address. A host MUST NOT probe a
    router's reachability in the absence of useful traffic that the host
    would have sent to the router if it were reachable. In any case,
    these probes MUST be rate-limited to no more than one per minute per
    router.

    Currently, when the neighbour corresponding to a router falls into
    NUD_FAILED, it's never considered again. Introduce a new rt6_nud_state
    value, RT6_NUD_FAIL_PROBE, which suggests the route should not be used but
    should be probed with a single NS. The probe is rate-limited by the existing
    code. To better distinguish meanings of the failure values, rename
    RT6_NUD_FAIL_SOFT to RT6_NUD_FAIL_DO_RR.

    Signed-off-by: Jiri Benc
    Acked-by: Hannes Frederic Sowa
    Signed-off-by: David S. Miller

    Jiri Benc
     
  • The help text of this function says: "in_dev: only on this interface,
    0=any interface", but since commit 39a6d0630012 ("[NETNS]: Process
    inet_confirm_addr in the correct namespace."), the code assumes that it
    will never be NULL. This function is never called with in_dev == NULL,
    but it is exported and may be used by an external module.

    Because this patch restores the ability to call inet_confirm_addr()
    with in_dev == NULL, I partially revert the above commit, as suggested
    by Julian.

    CC: Julian Anastasov
    Signed-off-by: Nicolas Dichtel
    Reviewed-by: Julian Anastasov
    Signed-off-by: David S. Miller

    Nicolas Dichtel
     

11 Dec, 2013

4 commits

  • TIPC is currently using the field 'af_packet_priv' in struct net_device
    as a handle to find the bearer instance associated with the given network
    device. But by doing so, it blocks other networking cleanups, such
    as the one discussed here:

    http://patchwork.ozlabs.org/patch/178044/

    This commit removes this usage from TIPC. Instead, we introduce a new
    field, 'tipc_ptr', to the net_device structure, to serve this purpose.
    When a TIPC bearer is enabled, the bearer object is associated with
    'tipc_ptr'. When a TIPC packet arrives in the recv_msg() upcall
    from a networking device, the bearer object can now be obtained from
    'tipc_ptr'. When a bearer is disabled, the bearer object is detached
    from its underlying network device by setting 'tipc_ptr' to NULL.

    Additionally, an RCU lock is used to protect the new pointer.
    Henceforth, the existing tipc_net_lock is used in write mode to
    serialize write accesses to this pointer, while the new RCU lock is
    applied on the read side to ensure that the pointer is 100% valid
    within its wrapped area for all readers.
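
    A minimal sketch of the read side (the delivery hook is hypothetical):

    static void example_recv(struct net_device *dev, struct sk_buff *skb)
    {
            struct tipc_bearer *b;

            rcu_read_lock();
            b = rcu_dereference(dev->tipc_ptr); /* NULL once disabled */
            if (b)
                    example_deliver(b, skb);    /* hypothetical hook */
            rcu_read_unlock();
    }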

    Signed-off-by: Ying Xue
    Cc: Patrick McHardy
    Reviewed-by: Paul Gortmaker
    Signed-off-by: Jon Maloy
    Signed-off-by: David S. Miller

    Ying Xue
     
  • Signed-off-by: Jiri Pirko
    Signed-off-by: David S. Miller

    Jiri Pirko
     
  • Signed-off-by: Jiri Pirko
    Signed-off-by: David S. Miller

    Jiri Pirko
     
  • Don't needlessly recompute 'opt[opt_iter + 1]' as we already have it
    stored in 'tag_len'.

    Signed-off-by: Paul Moore
    Signed-off-by: David S. Miller

    Paul Moore
     

10 Dec, 2013

16 commits

  • And use it if possible.

    Signed-off-by: Florent Fourcot
    Reviewed-by: Hannes Frederic Sowa
    Signed-off-by: David S. Miller

    Florent Fourcot
     
  • tclass information is now already stored in rcv_flowinfo.
    We do not need to store the same information twice.

    Signed-off-by: Florent Fourcot
    Reviewed-by: Hannes Frederic Sowa
    Signed-off-by: David S. Miller

    Florent Fourcot
     
  • Signed-off-by: Florent Fourcot
    Reviewed-by: Hannes Frederic Sowa
    Signed-off-by: David S. Miller

    Florent Fourcot
     
  • The current implementation of IPV6_FLOWINFO only gives a
    result if pktoptions is available (thanks to the
    ip6_datagram_recv_ctl function).
    It gives inconsistent results to user space: sometimes
    there is a result for getsockopt(IPV6_FLOWINFO), sometimes
    not.

    This patch adds rcv_flowinfo to store it, and returns it to
    userspace in the same way as the other pkt_options.
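
    A hedged userspace sketch (assuming the usual sockopt plus
    ancillary-data pattern; error handling omitted):

    int on = 1;

    /* ask the kernel to report flow info for received packets; the
     * stored rcv_flowinfo makes the result consistent */
    setsockopt(fd, IPPROTO_IPV6, IPV6_FLOWINFO, &on, sizeof(on));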

    Signed-off-by: Florent Fourcot
    Reviewed-by: Hannes Frederic Sowa
    Signed-off-by: David S. Miller

    Florent Fourcot
     
  • If CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS is set,
    several is_<foo>_ether_addr functions can be slightly
    improved by using u32 dereferences.

    I believe all current uses of is_zero_ether_addr and
    is_broadcast_ether_addr are u16 aligned, so always use
    u16 dereferences to improve those functions' performance.

    Document the u16 alignment requirements.
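
    A simplified sketch of the technique (condensed; not a verbatim copy
    of etherdevice.h):

    static inline bool example_ether_addr_equal(const u8 *a, const u8 *b)
    {
    #if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)
            /* one u32 and one u16 load per address instead of six
             * byte loads */
            u32 fold = ((*(const u32 *)a) ^ (*(const u32 *)b)) |
                       ((*(const u16 *)(a + 4)) ^ (*(const u16 *)(b + 4)));

            return fold == 0;
    #else
            const u16 *aa = (const u16 *)a;
            const u16 *bb = (const u16 *)b;

            return ((aa[0] ^ bb[0]) | (aa[1] ^ bb[1]) |
                    (aa[2] ^ bb[2])) == 0;
    #endif
    }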

    Signed-off-by: Joe Perches
    Signed-off-by: David S. Miller

    Joe Perches
     
  • Add a generic routine to test whether two possibly non-u16-aligned
    Ethernet addresses are equal.

    If CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS is set,
    this uses the slightly faster generic routine
    ether_addr_equal, otherwise this uses memcmp.

    Signed-off-by: Joe Perches
    Signed-off-by: David S. Miller

    Joe Perches
     
  • Make the behaviour similar to ipv4. This will allow the user to set
    sysctl default neigh param values, and these values will be respected
    even by devices registered earlier (the ones that do not have an
    address set yet).

    Signed-off-by: Jiri Pirko
    Signed-off-by: David S. Miller

    Jiri Pirko
     
  • Previously, inet devices were only constructed when addresses were
    added. Therefore the default neigh parms values they get are the ones
    in effect at the time of these operations.

    Now that we're creating inet devices earlier, this changes the behaviour
    of default neigh parms values in an incompatible way (see bug #8519).

    This patch creates a compromise by setting the default values at the
    same point as before but only for those that have not been explicitly
    set by the user since the inet device's creation.

    Introduced by:
    commit 8030f54499925d073a88c09f30d5d844fb1b3190
    Author: Herbert Xu
    Date: Thu Feb 22 01:53:47 2007 +0900

    [IPV4] devinet: Register inetdev earlier.

    Signed-off-by: Jiri Pirko
    Signed-off-by: David S. Miller

    Jiri Pirko
     
  • Signed-off-by: Jiri Pirko
    Signed-off-by: David S. Miller

    Jiri Pirko
     
  • This will be needed later on to provide better management of default values.

    Signed-off-by: Jiri Pirko
    Signed-off-by: David S. Miller

    Jiri Pirko
     
  • This patch converts the neigh param members to an array. This allows easier
    manipulation which will be needed later on to provide better management of
    default values.

    Signed-off-by: Jiri Pirko
    Signed-off-by: David S. Miller

    Jiri Pirko
     
  • The PHY library already reads the MII_STAT1000 and MII_LPA registers in
    genphy_read_status(), so extend it to also populate the PHY device link
    partner advertised features such that we can feed this back into ethtool
    when asked for it in phy_ethtool_gset().

    Signed-off-by: Florian Fainelli
    Signed-off-by: David S. Miller

    Florian Fainelli
     
  • This patch introduces a PACKET_QDISC_BYPASS socket option that
    allows using an xmit() function similar to pktgen's instead
    of taking the dev_queue_xmit() path. This can be very useful when
    PF_PACKET applications need to be used in a scenario similar
    to pktgen, but with a full, flexible packet payload that
    needs to be provided, for example.

    By default, nothing changes in behaviour for normal PF_PACKET
    TX users, so everything stays as is for applications. New users,
    however, can now set PACKET_QDISC_BYPASS if needed, in order to
    i) prevent their own packets from reentering packet_rcv() and
    ii) push frames directly to the driver.
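
    A short usage sketch (the option takes an int; error handling
    omitted), combined with the protocol-0 trick mentioned below:

    int one = 1;

    /* protocol 0: our own frames do not reenter packet_rcv() */
    int fd = socket(PF_PACKET, SOCK_RAW, 0);

    /* opt in to the direct xmit path, skipping the qdisc layer */
    setsockopt(fd, SOL_PACKET, PACKET_QDISC_BYPASS, &one, sizeof(one));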

    In doing so we can increase pps (here 64 byte packets) for
    PF_PACKET a bit:

    # CPUs -- QDISC_BYPASS -- qdisc path -- qdisc path[**]
    1 CPU == 1,509,628 pps -- 1,208,708 -- 1,247,436
    2 CPUs == 3,198,659 pps -- 2,536,012 -- 1,605,779
    3 CPUs == 4,787,992 pps -- 3,788,740 -- 1,735,610
    4 CPUs == 6,173,956 pps -- 4,907,799 -- 1,909,114
    5 CPUs == 7,495,676 pps -- 5,956,499 -- 2,014,422
    6 CPUs == 9,001,496 pps -- 7,145,064 -- 2,155,261
    7 CPUs == 10,229,776 pps -- 8,190,596 -- 2,220,619
    8 CPUs == 11,040,732 pps -- 9,188,544 -- 2,241,879
    9 CPUs == 12,009,076 pps -- 10,275,936 -- 2,068,447
    10 CPUs == 11,380,052 pps -- 11,265,337 -- 1,578,689
    11 CPUs == 11,672,676 pps -- 11,845,344 -- 1,297,412
    [...]
    20 CPUs == 11,363,192 pps -- 11,014,933 -- 1,245,081

    [**]: qdisc path with packet_rcv(), how probably most people
    seem to use it (hopefully not anymore if not needed)

    The test was done using a modified trafgen, sending a simple
    static 64 bytes packet, on all CPUs. The trick in the fast
    "qdisc path" case, is to avoid reentering packet_rcv() by
    setting the RAW socket protocol to zero, like:
    socket(PF_PACKET, SOCK_RAW, 0);

    Tradeoffs are documented as well in this patch, clearly, if
    queues are busy, we will drop more packets, tc disciplines are
    ignored, and these packets are not visible to taps anymore. For
    a pktgen like scenario, we argue that this is acceptable.

    The pointer to the xmit function has been placed in packet
    socket structure hole between cached_dev and prot_hook that
    is hot anyway as we're working on cached_dev in each send path.

    Done in joint work together with Jesper Dangaard Brouer.

    Signed-off-by: Daniel Borkmann
    Signed-off-by: Jesper Dangaard Brouer
    Signed-off-by: David S. Miller

    Daniel Borkmann
     
  • As we need it elsewhere, move the inline helper function of
    skb_needs_linearize() over to skbuff.h include file. While
    at it, also convert the return to 'bool' instead of 'int'
    and add a proper kernel doc.
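
    For reference, the helper's logic is roughly the following (a sketch
    of its shape, not a verbatim copy):

    static inline bool skb_needs_linearize(struct sk_buff *skb,
                                           netdev_features_t features)
    {
            /* linearize if the skb is nonlinear and the device handles
             * neither frag lists nor scatter-gather for it */
            return skb_is_nonlinear(skb) &&
                   ((skb_has_frag_list(skb) &&
                     !(features & NETIF_F_FRAGLIST)) ||
                    (skb_shinfo(skb)->nr_frags &&
                     !(features & NETIF_F_SG)));
    }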

    Signed-off-by: Daniel Borkmann
    Signed-off-by: Jesper Dangaard Brouer
    Signed-off-by: David S. Miller

    Daniel Borkmann
     
  • Merge 'net' into 'net-next' to get the AF_PACKET bug fix that
    Daniel's direct transmit changes depend upon.

    Signed-off-by: David S. Miller

    David S. Miller
     
  • Commit 6da7c8fcbcbd ("qdisc: allow setting default queuing discipline")
    added the ability to change the default qdisc from pfifo_fast to, say, fq.

    But as most modern ethernet devices are multiqueue, we can't really
    see all the statistics from "tc -s qdisc show", as the default root
    qdisc is mq.

    This patch adds the calls to qdisc_list_add() to mq and mqprio.

    Signed-off-by: Eric Dumazet
    Cc: Stephen Hemminger
    Signed-off-by: David S. Miller

    Eric Dumazet
     

07 Dec, 2013

9 commits

  • Add a new check for CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS to reduce
    the number of or's used in the ether_addr_equal comparison to very
    slightly improve function performance.

    Simplify the ether_addr_equal_64bits implementation.
    Integrate and remove the zap_last_2bytes helper as it's now
    used only once.

    Remove the now unused compare_ether_addr function.

    Update the unaligned-memory-access documentation to remove the
    compare_ether_addr description and show how unaligned accesses
    could occur with ether_addr_equal.

    Signed-off-by: Joe Perches
    Signed-off-by: David S. Miller

    Joe Perches
     
  • Creating an address with this flag set will result in the kernel taking
    care of temporary addresses in the same way as if the address had been
    created by the kernel itself (after RA receive). This allows userspace
    applications implementing autoconfiguration (NetworkManager, for
    example) to implement IPv6 address privacy.

    Signed-off-by: Jiri Pirko
    Signed-off-by: Thomas Haller
    Signed-off-by: David S. Miller

    Jiri Pirko
     
  • There is no more space in u8 ifa_flags. So do what davem suggested and
    add another netlink attr called IFA_FLAGS to carry more flags.
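
    Userspace would then emit the 32-bit attribute on top of the legacy
    u8 field; a hedged libmnl-style sketch (attribute payload is a u32):

    /* keep filling the legacy 8-bit field for old kernels, and add
     * the new 32-bit attribute carrying the full flag set */
    ifm->ifa_flags = flags & 0xff;
    mnl_attr_put_u32(nlh, IFA_FLAGS, flags);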

    Signed-off-by: Jiri Pirko
    Signed-off-by: Thomas Haller
    Signed-off-by: David S. Miller

    Jiri Pirko
     
  • Some network drivers use dev_kfree_skb_any() and dev_kfree_skb_irq()
    helpers to free skbs, both for dropped packets and TX completed ones.

    We need to separate the two causes to get better diagnostics
    from dropwatch or "perf record -e skb:kfree_skb".

    This patch provides two new helpers, dev_consume_skb_any() and
    dev_consume_skb_irq() to be used for consumed skbs.

    __dev_kfree_skb_irq() is slightly optimized to remove one
    atomic_dec_and_test() in fast path, and use this_cpu_{r|w} accessors.
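
    A hedged sketch of how a driver would pick between the two families:

    if (tx_completed)
            /* the packet was delivered: a "consume", not a drop --
             * keeps dropwatch and kfree_skb tracing meaningful */
            dev_consume_skb_any(skb);
    else
            /* error/drop path: keep the kfree variant so the event
             * shows up in "perf record -e skb:kfree_skb" */
            dev_kfree_skb_any(skb);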

    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet
     
  • Breakdown the PHY_*_FEATURES into per speed defines such that we can
    easily re-use them individually.

    Signed-off-by: Florian Fainelli
    Signed-off-by: David S. Miller

    Florian Fainelli
     
  • John W. Linville says:

    ====================
    Please pull this batch of updates intended for the 3.14 stream...

    For the mac80211 bits, Johannes says:

    "I have various improvements/cleanups/fixes all over, but the shortlog
    shows that Luis's regulatory work and mesh work from the cozybit folks
    are the biggest ones, along with the CSA fixes."

    Along with that, we have big batches of updates to brcmfmac, rtlwifi,
    and ath9k. There are updates to wcn36xx, rt2x00, and a handful of
    others as well.
    ====================

    Signed-off-by: David S. Miller

    David S. Miller
     
  • With the introduction of TCP Small Queues, TSO auto sizing, and TCP
    pacing, we can implement Automatic Corking in the kernel, to help
    applications doing small write()/sendmsg() to TCP sockets.

    The idea is to change tcp_push() to check whether the current skb
    payload is under the skb's optimal size (a multiple of MSS bytes).

    If under 'size_goal', and at least one packet is still in Qdisc or
    NIC TX queues, set the TCP Small Queue Throttled bit, so that the push
    will be delayed up to TX completion time.

    This delay might allow the application to coalesce more bytes
    in the skb in following write()/sendmsg()/sendfile() system calls.
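
    A hedged sketch of that decision (simplified; using sk_wmem_alloc as
    the "packet still queued" signal is an assumption of this example):

    static bool example_should_autocork(const struct sock *sk,
                                        const struct sk_buff *skb,
                                        int size_goal)
    {
            /* defer the flush if this skb is still under the optimal
             * size and the flow already has packets queued below us */
            return skb->len < size_goal &&
                   atomic_read(&sk->sk_wmem_alloc) > skb->truesize;
    }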

    The exact duration of the delay depends on the dynamics
    of the system, and might be zero if no packet for this flow
    is actually held in the Qdisc or NIC TX ring.

    Using FQ/pacing is a way to increase the probability of
    autocorking being triggered.

    Add a new sysctl (/proc/sys/net/ipv4/tcp_autocorking) to control
    this feature, defaulting to 1 (enabled).

    Add a new SNMP counter: nstat -a | grep TcpExtTCPAutoCorking
    This counter is incremented every time we detect that an skb was
    underused and its flush was deferred.

    Tested:

    Interesting effects when using line buffered commands under ssh.

    Excellent performance results in terms of CPU usage and total throughput.

    lpq83:~# echo 1 >/proc/sys/net/ipv4/tcp_autocorking
    lpq83:~# perf stat ./super_netperf 4 -t TCP_STREAM -H lpq84 -- -m 128
    9410.39

    Performance counter stats for './super_netperf 4 -t TCP_STREAM -H lpq84 -- -m 128':

    35209.439626 task-clock # 2.901 CPUs utilized
    2,294 context-switches # 0.065 K/sec
    101 CPU-migrations # 0.003 K/sec
    4,079 page-faults # 0.116 K/sec
    97,923,241,298 cycles # 2.781 GHz [83.31%]
    51,832,908,236 stalled-cycles-frontend # 52.93% frontend cycles idle [83.30%]
    25,697,986,603 stalled-cycles-backend # 26.24% backend cycles idle [66.70%]
    102,225,978,536 instructions # 1.04 insns per cycle
    # 0.51 stalled cycles per insn [83.38%]
    18,657,696,819 branches # 529.906 M/sec [83.29%]
    91,679,646 branch-misses # 0.49% of all branches [83.40%]

    12.136204899 seconds time elapsed

    lpq83:~# echo 0 >/proc/sys/net/ipv4/tcp_autocorking
    lpq83:~# perf stat ./super_netperf 4 -t TCP_STREAM -H lpq84 -- -m 128
    6624.89

    Performance counter stats for './super_netperf 4 -t TCP_STREAM -H lpq84 -- -m 128':
    40045.864494 task-clock # 3.301 CPUs utilized
    171 context-switches # 0.004 K/sec
    53 CPU-migrations # 0.001 K/sec
    4,080 page-faults # 0.102 K/sec
    111,340,458,645 cycles # 2.780 GHz [83.34%]
    61,778,039,277 stalled-cycles-frontend # 55.49% frontend cycles idle [83.31%]
    29,295,522,759 stalled-cycles-backend # 26.31% backend cycles idle [66.67%]
    108,654,349,355 instructions # 0.98 insns per cycle
    # 0.57 stalled cycles per insn [83.34%]
    19,552,170,748 branches # 488.244 M/sec [83.34%]
    157,875,417 branch-misses # 0.81% of all branches [83.34%]

    12.130267788 seconds time elapsed

    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet
     
  • Several files refer to an old address for the Free Software Foundation
    in the file header comment. Resolve this by replacing the address with
    the URL <http://www.fsf.org/> so that we do not have to keep
    updating the header comments anytime the address changes.

    CC: netfilter@vger.kernel.org
    CC: Pablo Neira Ayuso
    CC: Patrick McHardy
    CC: Jozsef Kadlecsik
    Signed-off-by: Jeff Kirsher
    Signed-off-by: David S. Miller

    Jeff Kirsher
     
  • Several files refer to an old address for the Free Software Foundation
    in the file header comment. Resolve this by replacing the address with
    the URL <http://www.fsf.org/> so that we do not have to keep
    updating the header comments anytime the address changes.

    Signed-off-by: Jeff Kirsher
    Signed-off-by: David S. Miller

    Jeff Kirsher