27 Jul, 2011

1 commit

  • [ This patch has already been accepted as commit 0ac0c0d0f837 but was later
    reverted (commit 35926ff5fba8) because it introduced an arch-specific
    __node_random which was defined only for x86, so it broke other archs.
    This is a follow-up without any arch-specific code. Other than that there
    are no functional changes.]

    Some workloads that create a large number of small files tend to assign
    too many pages to node 0 (multi-node systems). Part of the reason is
    that the rotor (in cpuset_mem_spread_node()) used to assign nodes starts
    at node 0 for newly created tasks.

    This patch changes the rotor to be initialized to a random node number
    of the cpuset.
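
    A minimal sketch of how such a random starting node could be chosen from
    the cpuset's mems_allowed (the helper name and its details are
    illustrative, not the exact in-tree code):

        #include <linux/nodemask.h>
        #include <linux/numa.h>
        #include <linux/random.h>

        /* Pick a pseudo-random node out of a nodemask. */
        static int example_random_node(const nodemask_t *mask)
        {
                int w = nodes_weight(*mask);
                int target, nid;

                if (!w)
                        return NUMA_NO_NODE;            /* empty mask */

                target = get_random_int() % w;          /* choose the n-th set bit */
                for_each_node_mask(nid, *mask)
                        if (target-- == 0)
                                return nid;
                return first_node(*mask);               /* not reached */
        }

    A new task's spread rotor would then be seeded with
    example_random_node(&current->mems_allowed) instead of node 0.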

    [akpm@linux-foundation.org: fix layout]
    [Lee.Schermerhorn@hp.com: Define stub numa_random() for !NUMA configuration]
    [mhocko@suse.cz: Make it arch independent]
    [akpm@linux-foundation.org: fix CONFIG_NUMA=y, MAX_NUMNODES>1 build]
    Signed-off-by: Jack Steiner
    Signed-off-by: Lee Schermerhorn
    Signed-off-by: Michal Hocko
    Reviewed-by: KOSAKI Motohiro
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: Paul Menage
    Cc: Jack Steiner
    Cc: Robin Holt
    Cc: David Rientjes
    Cc: Christoph Lameter
    Cc: David Rientjes
    Cc: Jack Steiner
    Cc: KOSAKI Motohiro
    Cc: Lee Schermerhorn
    Cc: Michal Hocko
    Cc: Paul Menage
    Cc: Pekka Enberg
    Cc: Robin Holt
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michal Hocko
     

31 May, 2010

1 commit

  • This reverts commit 0ac0c0d0f837c499afd02a802f9cf52d3027fa3b, which
    caused cross-architecture build problems for all the wrong reasons.
    IA64 already added its own version of __node_random(), but the fact is,
    there is nothing architectural about the function, and the original
    commit was just badly done. Revert it, since no fix is forthcoming.

    Requested-by: Stephen Rothwell
    Signed-off-by: Linus Torvalds

    Linus Torvalds
     

28 May, 2010

1 commit

  • Some workloads that create a large number of small files tend to assign
    too many pages to node 0 (multi-node systems). Part of the reason is that
    the rotor (in cpuset_mem_spread_node()) used to assign nodes starts at
    node 0 for newly created tasks.

    This patch changes the rotor to be initialized to a random node number of
    the cpuset.

    [akpm@linux-foundation.org: fix layout]
    [Lee.Schermerhorn@hp.com: Define stub numa_random() for !NUMA configuration]
    Signed-off-by: Jack Steiner
    Signed-off-by: Lee Schermerhorn
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: Paul Menage
    Cc: Jack Steiner
    Cc: Robin Holt
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jack Steiner
     

13 Mar, 2010

1 commit


07 Mar, 2010

1 commit

  • The macro any_online_node() is prone to producing sparse warnings due to
    the local symbol 'node'. Since all the in-tree users are really
    requesting the first online node (the mask argument is either
    NODE_MASK_ALL or node_online_map), just use the first_online_node macro
    and remove any_online_node, which then has no remaining users.
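
    A sketch of the conversion described above (the caller is hypothetical):

        #include <linux/nodemask.h>

        static int example_pick_default_node(void)
        {
                /* before: nid = any_online_node(node_online_map);       */
                /* after:  first_online_node is the first bit in N_ONLINE */
                return first_online_node;
        }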

    Signed-off-by: H Hartley Sweeten
    Acked-by: David Rientjes
    Reviewed-by: KAMEZAWA Hiroyuki
    Cc: Mel Gorman
    Cc: Lee Schermerhorn
    Acked-by: Benjamin Herrenschmidt
    Cc: Paul Mackerras
    Cc: Dave Hansen
    Cc: Milton Miller
    Cc: Nathan Fontenot
    Cc: Geoff Levand
    Cc: Grant Likely
    Cc: J. Bruce Fields
    Cc: Neil Brown
    Cc: Trond Myklebust
    Cc: David S. Miller
    Cc: Benny Halevy
    Cc: Chuck Lever
    Cc: Ricardo Labiaga
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    H Hartley Sweeten
     

16 Dec, 2009

3 commits

  • Objects passed to NODEMASK_ALLOC() are relatively small in size and are
    backed by slab caches that are not of large order, traditionally never
    greater than PAGE_ALLOC_COSTLY_ORDER.

    Thus, using GFP_KERNEL for these allocations on large machines when
    CONFIG_NODES_SHIFT > 8 will cause the page allocator to loop endlessly in
    the allocation attempt, each time invoking both direct reclaim and the oom
    killer.

    This is of particular interest when using NODEMASK_ALLOC() from a
    mempolicy context (either directly in mm/mempolicy.c or the mempolicy
    constrained hugetlb allocations) since the oom killer always kills current
    when allocations are constrained by mempolicies. So for all present use
    cases in the kernel, current would end up being oom killed when direct
    reclaim fails. That would allow the NODEMASK_ALLOC() to succeed but
    current would have sacrificed itself upon returning.

    This patch adds gfp flags to NODEMASK_ALLOC() to pass to kmalloc() on
    CONFIG_NODES_SHIFT > 8; this parameter is a nop on other configurations.
    All current use cases, either directly from hugetlb code or indirectly via
    NODEMASK_SCRATCH(), OR in __GFP_NORETRY to avoid direct reclaim and the oom
    killer when the slab allocator needs to allocate additional pages.

    The side-effect of this change is that all current use cases of either
    NODEMASK_ALLOC() or NODEMASK_SCRATCH() need appropriate -ENOMEM handling
    when the allocation fails (never for CONFIG_NODES_SHIFT <= 8, where the
    mask is placed on the stack).
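
    A sketch of the resulting calling convention with a hypothetical caller;
    the gfp argument only reaches kmalloc() when CONFIG_NODES_SHIFT > 8:

        #include <linux/errno.h>
        #include <linux/gfp.h>
        #include <linux/nodemask.h>
        #include <linux/slab.h>

        static int example_caller(void)
        {
                NODEMASK_ALLOC(nodemask_t, nodes_allowed,
                               GFP_KERNEL | __GFP_NORETRY);

                if (!nodes_allowed)     /* only possible on large-node configs */
                        return -ENOMEM;

                nodes_clear(*nodes_allowed);
                /* ... populate and use *nodes_allowed ... */

                NODEMASK_FREE(nodes_allowed);
                return 0;
        }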
    Acked-by: KAMEZAWA Hiroyuki
    Cc: Lee Schermerhorn
    Cc: Mel Gorman
    Cc: Randy Dunlap
    Cc: Nishanth Aravamudan
    Cc: Andi Kleen
    Cc: David Rientjes
    Cc: Adam Litke
    Cc: Andy Whitcroft
    Cc: Eric Whitney
    Cc: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • Factor init_nodemask_of_node() out of the nodemask_of_node() macro.

    This will be used to populate the huge pages "nodes_allowed" nodemask for
    a single node when basing nodes_allowed on a preferred/local mempolicy or
    when a persistent huge page pool page count is modified via a per node
    sysfs attribute.
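
    A sketch of the new helper in use (the surrounding function is
    illustrative):

        #include <linux/nodemask.h>

        /* Build a nodes_allowed mask containing exactly one node. */
        static void example_single_node_mask(nodemask_t *nodes_allowed, int nid)
        {
                /* roughly: clear the mask, then set bit nid */
                init_nodemask_of_node(nodes_allowed, nid);
        }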

    Signed-off-by: Lee Schermerhorn
    Acked-by: Mel Gorman
    Reviewed-by: Andi Kleen
    Cc: KAMEZAWA Hiroyuki
    Cc: Randy Dunlap
    Cc: Nishanth Aravamudan
    Acked-by: David Rientjes
    Cc: Adam Litke
    Cc: Andy Whitcroft
    Cc: Eric Whitney
    Cc: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Lee Schermerhorn
     
  • This is a series of patches to provide control over the location of the
    allocation and freeing of persistent huge pages on a NUMA platform.
    Please consider for merging into mmotm.

    This series uses two mechanisms to constrain the nodes from which
    persistent huge pages are allocated: 1) the task NUMA mempolicy of the
    task modifying a new sysctl "nr_hugepages_mempolicy", based on a
    suggestion by Mel Gorman; and 2) a subset of the hugepages hstate sysfs
    attributes have been added [in V4] to each node system device under:

    /sys/devices/node/node[0-9]*/hugepages

    The per node attributes allow direct assignment of a huge page count on a
    specific node, regardless of the task's mempolicy or cpuset constraints.

    This patch:

    NODEMASK_ALLOC(x, m) assumes x is a type of struct, which is unnecessary.
    It's perfectly reasonable to use this macro to allocate a nodemask_t,
    which is anonymous, either dynamically or on the stack depending on
    NODES_SHIFT.

    Signed-off-by: David Rientjes
    Signed-off-by: Lee Schermerhorn
    Acked-by: KAMEZAWA Hiroyuki
    Cc: Mel Gorman
    Cc: Randy Dunlap
    Cc: Nishanth Aravamudan
    Cc: Andi Kleen
    Cc: David Rientjes
    Cc: Adam Litke
    Cc: Andy Whitcroft
    Cc: Eric Whitney
    Cc: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     

08 Aug, 2009

1 commit

  • At first, init_task's mems_allowed is initialized as:
    init_task->mems_allowed == node_states[N_POSSIBLE]

    And cpuset's top_cpuset mask is initialized as:
    top_cpuset->mems_allowed = node_states[N_HIGH_MEMORY]

    Before 2.6.29, a policy's mems_allowed was initialized like this:

    1. update task->mems_allowed from its cpuset->mems_allowed.
    2. policy->mems_allowed = nodes_and(task->mems_allowed, user's mask)

    The task's mems_allowed was thus updated with reference to top_cpuset's,
    and a cpuset's mems_allowed is always aware of N_HIGH_MEMORY.

    In 2.6.30, after commit 58568d2a8215cb6f55caf2332017d7bdff954e1c
    ("cpuset,mm: update tasks' mems_allowed in time"), a policy's mems_allowed
    is initialized like this:

    1. policy->mems_allowed = nodes_and(task->mems_allowed, user's mask)

    Here, if the task is in top_cpuset, task->mems_allowed is not updated from
    init's. Assume the user executes a command such as:
    # numactl --interleave=all ...

    policy->mems_allowed = nodes_and(N_POSSIBLE, ALL_SET_MASK)

    Then policy's mems_allowed can include a possible node which has no pgdat.

    MPOL_INTERLEAVE just scans the nodemask of task->mems_allowed and accesses
    NODE_DATA(nid)->zonelist directly, even if NODE_DATA(nid) == NULL.

    So what we need is to make policy->mems_allowed aware of N_HIGH_MEMORY.
    This patch does that. But doing so requires an extra nodemask, which would
    otherwise end up on the stack; because cpumask already has the
    CPUMASK_ALLOC() interface, an equivalent is added for nodemasks.

    This patch keeps the old behavior, but the fix itself is just a Band-Aid.
    A fundamental fix would have to take care of memory hotplug, and that
    takes time. (task->mems_allowed should be based on N_HIGH_MEMORY, I
    think.)

    mpol_set_nodemask() should be aware of N_HIGH_MEMORY, and the policy's
    nodemask should include only online nodes.

    In the old behavior this was guaranteed by frequent references into cpuset
    code. Now most of those are gone and mempolicy has to check it by itself.

    To do this check, a few nodemask_t values are needed for the calculation.
    But nodemask_t can be large, and it is not good to allocate them on the
    stack.

    cpumask_t already has CPUMASK_ALLOC/FREE as an easy way to get a scratch
    area; NODEMASK_ALLOC/FREE should exist as well.
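
    A sketch of the core idea (the function and its names are illustrative,
    not the in-tree code): intersect the user-supplied mask with the nodes
    that actually have memory rather than with all possible nodes.

        #include <linux/nodemask.h>

        static void example_restrict_policy_nodes(nodemask_t *policy_nodes,
                                                  const nodemask_t *user_nodes)
        {
                /* keep only nodes with (high) memory, i.e. nodes with a pgdat */
                nodes_and(*policy_nodes, *user_nodes, node_states[N_HIGH_MEMORY]);
        }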

    [akpm@linux-foundation.org: cleanups & tweaks]
    Tested-by: KOSAKI Motohiro
    Signed-off-by: KAMEZAWA Hiroyuki
    Cc: Miao Xie
    Cc: Ingo Molnar
    Cc: Peter Zijlstra
    Cc: Christoph Lameter
    Cc: Paul Menage
    Cc: Nick Piggin
    Cc: Yasunori Goto
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Lee Schermerhorn
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KAMEZAWA Hiroyuki
     

17 Jun, 2009

1 commit

  • num_online_nodes() is called in a number of places but most often by the
    page allocator when deciding whether the zonelist needs to be filtered
    based on cpusets or the zonelist cache. This is actually a heavy function
    and touches a number of cache lines.

    This patch stores the number of online nodes at boot time and updates the
    value when nodes get onlined and offlined. The value is then used in a
    number of important paths in place of num_online_nodes().
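
    A sketch of the caching idea, simplified relative to the in-tree helpers
    (names are illustrative):

        #include <linux/cache.h>
        #include <linux/nodemask.h>

        int example_nr_online_nodes __read_mostly = 1;

        static void example_node_set_online(int nid)
        {
                node_set_state(nid, N_ONLINE);
                /* refresh the cheap counter only on the rare online path */
                example_nr_online_nodes = num_online_nodes();
        }

    Hot paths then read example_nr_online_nodes instead of calling
    num_online_nodes(), which recomputes the weight of the online nodemask.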

    [rientjes@google.com: do not override definition of node_set_online() with macro]
    Signed-off-by: Christoph Lameter
    Signed-off-by: Mel Gorman
    Cc: KOSAKI Motohiro
    Cc: Pekka Enberg
    Cc: Peter Zijlstra
    Cc: Nick Piggin
    Cc: Dave Hansen
    Cc: Lee Schermerhorn
    Signed-off-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     

28 Apr, 2008

1 commit

  • The following adds two more bitmap operators, bitmap_onto() and bitmap_fold(),
    with the usual cpumask and nodemask wrappers.

    The bitmap_onto() operator computes one bitmap relative to another. If the
    n-th bit in the origin mask is set, then the m-th bit of the destination mask
    will be set, where m is the position of the n-th set bit in the relative mask.

    The bitmap_fold() operator folds a bitmap into a second that has bit m set iff
    the input bitmap has some bit n set, where m == n mod sz, for the specified sz
    value.

    There are two substantive changes between this patch and its
    predecessor bitmap_relative:
    1) Renamed bitmap_relative() to be bitmap_onto().
    2) Added bitmap_fold().

    The essential motivation for bitmap_onto() is to provide a mechanism for
    converting a cpuset-relative CPU or Node mask to an absolute mask. Cpuset
    relative masks are written as if the current task were in a cpuset whose CPUs
    or Nodes were just the consecutive ones numbered 0..N-1, for some N. The
    bitmap_onto() operator is provided in anticipation of adding support for the
    first such cpuset relative mask, by the mbind() and set_mempolicy() system
    calls, using a planned flag of MPOL_F_RELATIVE_NODES. These bitmap operators
    (and their nodemask wrappers, in particular) will be used in code that
    converts the user specified cpuset relative memory policy to a specific system
    node numbered policy, given the current mems_allowed of the tasks cpuset.

    Such cpuset relative mempolicies will address two deficiencies
    of the existing interface between cpusets and mempolicies:
    1) A task cannot at present reliably establish a cpuset
    relative mempolicy because there is an essential race
    condition, in that the tasks cpuset may be changed in
    between the time the task can query its cpuset placement,
    and the time the task can issue the applicable mbind or
    set_mempolicy system call.
    2) A task cannot at present establish what cpuset relative
    mempolicy it would like to have, if it is in a smaller
    cpuset than it might have mempolicy preferences for,
    because the existing interface only allows specifying
    mempolicies for nodes currently allowed by the cpuset.

    Cpuset relative mempolicies are useful for tasks that don't distinguish
    particularly between one CPU or Node and another, but only between how many of
    each are allowed, and the proper placement of threads and memory pages on the
    various CPUs and Nodes available.

    The motivation for the added bitmap_fold() can be seen in the following
    example.

    Let's say an application has specified some mempolicies that presume 16 memory
    nodes, including say a mempolicy that specified MPOL_F_RELATIVE_NODES (cpuset
    relative) nodes 12-15. Then let's say that application is crammed into a
    cpuset that only has 8 memory nodes, 0-7. If one just uses bitmap_onto(),
    this mempolicy, mapped to that cpuset, would ignore the requested relative
    nodes above 7, leaving it empty of nodes. That's not good; better to fold the
    higher nodes down, so that some nodes are included in the resulting mapped
    mempolicy. In this case, the mempolicy nodes 12-15 are taken modulo 8 (the
    weight of the mems_allowed of the confining cpuset), resulting in a mempolicy
    specifying nodes 4-7.
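
    A sketch of that example using the new nodemask wrappers (the surrounding
    function is illustrative): relative nodes 12-15 are folded modulo the
    weight of mems_allowed (8) and then mapped onto the allowed nodes 0-7,
    yielding nodes 4-7.

        #include <linux/nodemask.h>

        static void example_map_relative(nodemask_t *out,
                                         const nodemask_t *relative,     /* e.g. 12-15 */
                                         const nodemask_t *mems_allowed) /* e.g. 0-7   */
        {
                nodemask_t folded;

                nodes_fold(folded, *relative, nodes_weight(*mems_allowed));
                nodes_onto(*out, folded, *mems_allowed);
        }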

    Signed-off-by: Paul Jackson
    Signed-off-by: David Rientjes
    Cc: Christoph Lameter
    Cc: Andi Kleen
    Cc: Mel Gorman
    Cc: Lee Schermerhorn
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paul Jackson
     

17 Oct, 2007

3 commits

  • We need a check for whether a node has a cpu in zone reclaim: zone reclaim
    will not allow remote zone reclaim from a node that has cpus of its own.
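
    A sketch of the kind of check this enables (the real zone reclaim logic
    has more conditions):

        #include <linux/nodemask.h>

        static int example_may_remote_reclaim(int nid)
        {
                /* a node that has its own cpus can reclaim for itself */
                return !node_state(nid, N_CPU);
        }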

    [Lee.Schermerhorn@hp.com: Move setup of N_CPU node state mask]
    Signed-off-by: Christoph Lameter
    Tested-by: Lee Schermerhorn
    Acked-by: Bob Picco
    Cc: Nishanth Aravamudan
    Cc: KAMEZAWA Hiroyuki
    Cc: Mel Gorman
    Signed-off-by: Lee Schermerhorn
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • It is necessary to know if nodes have memory since we have recently begun to
    add support for memoryless nodes. For that purpose we introduce two new
    node states: N_HIGH_MEMORY and N_NORMAL_MEMORY.

    A node has its bit in N_HIGH_MEMORY set if it has any memory regardless of the
    type of memory. If a node has memory then it has at least one zone defined
    in its pgdat structure that is located in the pgdat itself.

    A node has its bit in N_NORMAL_MEMORY set if it has a lower zone than
    ZONE_HIGHMEM. This means it is possible to allocate memory that is not
    subject to kmap.

    N_HIGH_MEMORY and N_NORMAL_MEMORY can then be used in various places to ensure
    that we do the right thing when we encounter a memoryless node.
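
    A sketch of how the new states are consumed (the loop body is
    illustrative):

        #include <linux/nodemask.h>

        static void example_walk_memory_nodes(void)
        {
                int nid;

                for_each_node_state(nid, N_HIGH_MEMORY) {   /* nodes with any memory */
                        if (node_state(nid, N_NORMAL_MEMORY)) {
                                /* also has memory below ZONE_HIGHMEM (no kmap needed) */
                        }
                }
        }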

    [akpm@linux-foundation.org: build fix]
    [Lee.Schermerhorn@hp.com: update N_HIGH_MEMORY node state for memory hotadd]
    [y-goto@jp.fujitsu.com: Fix memory hotplug + sparsemem build]
    Signed-off-by: Lee Schermerhorn
    Signed-off-by: Nishanth Aravamudan
    Signed-off-by: Christoph Lameter
    Acked-by: Bob Picco
    Cc: KAMEZAWA Hiroyuki
    Cc: Mel Gorman
    Signed-off-by: Yasunori Goto
    Signed-off-by: Paul Mundt
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • Why do we need to support memoryless nodes?

    KAMEZAWA Hiroyuki wrote:

    > For fujitsu, problem is called "empty" node.
    >
    > When ACPI's SRAT table includes "possible nodes", ia64 bootstrap(acpi_numa_init)
    > creates nodes, which includes no memory, no cpu.
    >
    > I tried to remove empty-node in past, but that was denied.
    > It was because we can hot-add cpu to the empty node.
    > (node-hotplug triggered by cpu is not implemented now. and it will be ugly.)
    >
    >
    > For HP, (Lee can comment on this later), they have memory-less-node.
    > As far as I hear, HP's machines can have the following configuration.
    >
    > (example)
    > Node0: CPU0 memory AAA MB
    > Node1: CPU1 memory AAA MB
    > Node2: CPU2 memory AAA MB
    > Node3: CPU3 memory AAA MB
    > Node4: Memory XXX GB
    >
    > AAA is very small value (below 16MB) and will be omitted by ia64 bootstrap.
    > After boot, only Node 4 has valid memory (but have no cpu.)
    >
    > Maybe this is memory-interleave by firmware config.

    Christoph Lameter wrote:

    > Future SGI platforms (actually also current one can have but nothing like
    > that is deployed to my knowledge) have nodes with only cpus. Current SGI
    > platforms have nodes with just I/O that we so far cannot manage in the
    > core. So the arch code maps them to the nearest memory node.

    Lee Schermerhorn wrote:

    > For the HP platforms, we can configure each cell with from 0% to 100%
    > "cell local memory". When we configure with improve bandwidth at the expense of latency for numa-challenged
    > applications [and OSes, but not our problem ;-)]. When we boot Linux on
    > such a config, all of the real nodes have no memory--it all resides in a
    > single interleaved pseudo-node.
    >
    > When we boot Linux on a 100% CLM configuration [== NUMA], we still have
    > the interleaved pseudo-node. It contains a few hundred MB stolen from
    > the real nodes to contain the DMA zone. [Interleaved memory resides at
    > phys addr 0]. The memoryless-nodes patches, along with the zoneorder
    > patches, support this config as well.
    >
    > Also, when we boot a NUMA config with the "mem=" command line,
    > specifying less memory than actually exists, Linux takes the excluded
    > memory "off the top" rather than distributing it across the nodes. This
    > can result in memoryless nodes, as well.
    >

    This patch:

    Preparation for memoryless node patches.

    Provide a generic way to keep nodemasks describing various characteristics of
    NUMA nodes.

    Remove the node_online_map and the node_possible_map and realize the same
    functionality using two node states: N_POSSIBLE and N_ONLINE.
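
    A simplified sketch of the mechanism (the in-tree enum has more states and
    different initialization details):

        #include <linux/nodemask.h>

        enum example_node_states {
                EX_N_POSSIBLE,          /* node could become online */
                EX_N_ONLINE,            /* node is online */
                EX_NR_NODE_STATES
        };

        static nodemask_t example_node_states[EX_NR_NODE_STATES] = {
                [EX_N_POSSIBLE] = NODE_MASK_ALL,
                [EX_N_ONLINE]   = NODE_MASK_NONE,   /* set per node as it comes up */
        };

        #define example_node_online(nid) \
                node_isset((nid), example_node_states[EX_N_ONLINE])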

    [Lee.Schermerhorn@hp.com: Initialize N_*_MEMORY and N_CPU masks for non-NUMA config]
    Signed-off-by: Christoph Lameter
    Tested-by: Lee Schermerhorn
    Acked-by: Lee Schermerhorn
    Acked-by: Bob Picco
    Cc: Nishanth Aravamudan
    Cc: KAMEZAWA Hiroyuki
    Cc: Mel Gorman
    Signed-off-by: Lee Schermerhorn
    Cc: "Serge E. Hallyn"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     

21 Feb, 2007

1 commit

  • highest_possible_node_id() is currently used to calculate the last possible
    node id so that the network subsystem can figure out how to size per node
    arrays.

    I think having the ability to determine the maximum number of nodes in a
    system at runtime is useful, but then we should name this entry
    correspondingly; it should return the number of node ids, and the value
    needs to be set up only once at bootup. The node_possible_map does not
    change after bootup.

    This patch introduces nr_node_ids and replaces the use of
    highest_possible_node_id(). nr_node_ids is calculated at bootup when the
    page allocator's pagesets are initialized.
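
    A sketch of the intended use (the caller is illustrative): size a per-node
    array once with nr_node_ids instead of recomputing the highest possible
    node id at each call site.

        #include <linux/nodemask.h>
        #include <linux/slab.h>

        static void **example_alloc_per_node_ptrs(gfp_t gfp)
        {
                /* one slot per possible node id, sized once at boot */
                return kcalloc(nr_node_ids, sizeof(void *), gfp);
        }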

    [deweerdt@free.fr: fix oops]
    Signed-off-by: Christoph Lameter
    Cc: Neil Brown
    Cc: Trond Myklebust
    Signed-off-by: Frederik Deweerdt
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     

12 Oct, 2006

1 commit

  • lib/bitmap.c:bitmap_parse() is a library function that received as input a
    user buffer. This seemed to have originated from the way the write_proc
    function of the /proc filesystem operates.

    This has been reworked to not use kmalloc and eliminates a lot of
    get_user() overhead by performing one access_ok before using __get_user().

    We need to test if we are in kernel or user space (is_user) and access the
    buffer differently. We cannot use __get_user() to access kernel addresses
    in all cases, for example in architectures with separate address space for
    kernel and user.

    This function will be useful for other uses as well; for example, taking
    input for /sysfs instead of /proc, so it was changed to accept kernel
    buffers. We have this use for the Linux UWB project, as part of the
    upcoming bandwidth allocator code.

    Only a few routines used this function and they were changed too.
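
    A sketch of the resulting calling convention (the buffer contents are
    illustrative): bitmap_parse() now takes a kernel buffer, while a separate
    user-buffer variant keeps the old /proc behaviour.

        #include <linux/bitmap.h>
        #include <linux/string.h>

        static int example_parse_mask(unsigned long *dst, int nbits)
        {
                const char *buf = "3,ff";   /* comma-separated hex groups */

                return bitmap_parse(buf, strlen(buf), dst, nbits);
        }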

    Signed-off-by: Reinette Chatre
    Signed-off-by: Inaky Perez-Gonzalez
    Cc: Paul Jackson
    Cc: Joe Korty
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Reinette Chatre
     

02 Oct, 2006

1 commit


28 Mar, 2006

1 commit

  • This patch defines for_each_online_pgdat() as a replacement of
    for_each_pgdat()

    Now, online nodes are managed by node_online_map, but for_each_pgdat()
    uses pgdat_link to iterate over all nodes (pgdats). This means the
    management structure for online pgdats is duplicated.

    I think using node_online_map for for_each_pgdat() is simpler and saner
    than pgdat_link. The new macro is named for_each_online_pgdat(). A
    following patch will fix the callers of for_each_pgdat().

    The bootmem allocator uses for_each_pgdat() before pgdat initialization; I
    don't think that is sane. A following patch will fix that as well.
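
    A sketch of the new iterator in use (the loop body is illustrative):

        #include <linux/mmzone.h>

        static unsigned long example_total_spanned_pages(void)
        {
                struct pglist_data *pgdat;
                unsigned long total = 0;

                for_each_online_pgdat(pgdat)
                        total += pgdat->node_spanned_pages;

                return total;
        }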

    Signed-off-by: Yasunori Goto
    Signed-off-by: KAMEZAWA Hiroyuki
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KAMEZAWA Hiroyuki
     

08 Feb, 2006

1 commit


31 Oct, 2005

1 commit

  • In the forthcoming task migration support, a key calculation will be
    mapping cpu and node numbers from the old set to the new set while
    preserving cpuset-relative offset.

    For example, if a task and its pages on nodes 8-11 are being migrated to
    nodes 24-27, then pages on node 9 (the 2nd node in the old set) should be
    moved to node 25 (the 2nd node in the new set.)

    As with other bitmap operations, the proper way to code this is to provide
    the underlying calculation in lib/bitmap.c, and then to provide the usual
    cpumask and nodemask wrappers.

    This patch provides that. These operations are termed 'remap' operations.
    Remapping both a single bit and a whole set of bits is supported.
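
    A sketch of the example above using the new nodemask wrappers (the mask
    setup is illustrative and assumes MAX_NUMNODES > 27):

        #include <linux/nodemask.h>

        static int example_remap(void)
        {
                nodemask_t oldset = NODE_MASK_NONE, newset = NODE_MASK_NONE;
                int nid;

                for (nid = 8; nid <= 11; nid++)         /* old placement: 8-11  */
                        node_set(nid, oldset);
                for (nid = 24; nid <= 27; nid++)        /* new placement: 24-27 */
                        node_set(nid, newset);

                /* node 9 is the 2nd node of the old set -> 2nd of the new set */
                return node_remap(9, oldset, newset);   /* returns 25 */
        }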

    Signed-off-by: Paul Jackson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paul Jackson
     

17 Apr, 2005

1 commit

  • Initial git repository build. I'm not bothering with the full history,
    even though we have it. We can create a separate "historical" git
    archive of that later if we want to, and in the meantime it's about
    3.2GB when imported into git - space that would just make the early
    git days unnecessarily complicated, when we don't have a lot of good
    infrastructure for it.

    Let it rip!

    Linus Torvalds