02 Oct, 2006

6 commits

  • This patch adds basic IPC namespace functionality to
    IPC utils:
    - init_ipc_ns
    - copy/clone/unshare/free IPC ns
    - /proc preparations

    Signed-off-by: Pavel Emelianov
    Signed-off-by: Kirill Korotaev
    Cc: "Eric W. Biederman"
    Cc: Cedric Le Goater
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Kirill Korotaev
     
  • This patch set allows unsharing IPC and having a private set of IPC objects
    (sem, shm, msg) inside a namespace. Basically, it is another building block
    of containers functionality.

    This patch implements the core IPC namespace changes (a usage sketch
    follows the list):
    - ipc_namespace structure
    - new config option CONFIG_IPC_NS
    - adds CLONE_NEWIPC flag
    - unshare support
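
    A minimal userspace sketch of what the new flag and unshare support are
    meant to enable, assuming the libc headers export CLONE_NEWIPC and the
    unshare() wrapper (illustrative, not part of the patch):

        #define _GNU_SOURCE
        #include <sched.h>
        #include <stdio.h>
        #include <sys/ipc.h>
        #include <sys/shm.h>

        int main(void)
        {
                /* detach from the parent's IPC namespace */
                if (unshare(CLONE_NEWIPC) < 0) {
                        perror("unshare(CLONE_NEWIPC)");
                        return 1;
                }
                /* this segment exists only inside the new IPC namespace */
                int id = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
                printf("private shm id: %d\n", id);
                return 0;
        }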

    [clg@fr.ibm.com: small fix for unshare of ipc namespace]
    [akpm@osdl.org: build fix]
    Signed-off-by: Pavel Emelianov
    Signed-off-by: Kirill Korotaev
    Signed-off-by: Cedric Le Goater
    Cc: "Eric W. Biederman"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Kirill Korotaev
     
  • The nsproxy was being copied in unshare() whenever anything was being
    unshared, even if it was something not referenced from nsproxy. In some
    cases this could result in far more memory usage than necessary.
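
    A hedged sketch of the shape of the fix: copy the nsproxy only when one of
    the namespaces it references is being unshared (the flag set and helper
    name here are illustrative, not necessarily the patch's):

        /* in sys_unshare(): */
        if (unshare_flags & (CLONE_NEWNS | CLONE_NEWUTS | CLONE_NEWIPC)) {
                /* only these flags touch state hanging off nsproxy */
                new_nsproxy = dup_namespaces(current->nsproxy);
                if (!new_nsproxy)
                        return -ENOMEM;
        }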

    Signed-off-by: Serge Hallyn
    Cc: Kirill Korotaev
    Cc: "Eric W. Biederman"
    Cc: Herbert Poetzl
    Cc: Andrey Savochkin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Serge Hallyn
     
  • Implement a CLONE_NEWUTS flag, and use it at clone and sys_unshare.
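
    A minimal userspace sketch of what the flag enables, assuming CLONE_NEWUTS
    is visible to userspace headers (illustrative only):

        #define _GNU_SOURCE
        #include <sched.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
                if (unshare(CLONE_NEWUTS) < 0) {
                        perror("unshare(CLONE_NEWUTS)");
                        return 1;
                }
                /* affects the hostname only inside the new UTS namespace */
                if (sethostname("container", 9) < 0)
                        perror("sethostname");
                return 0;
        }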

    [clg@fr.ibm.com: IPC unshare fix]
    [bunk@stusta.de: cleanup]
    Signed-off-by: Serge Hallyn
    Cc: Kirill Korotaev
    Cc: "Eric W. Biederman"
    Cc: Herbert Poetzl
    Cc: Andrey Savochkin
    Signed-off-by: Adrian Bunk
    Signed-off-by: Cedric Le Goater
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Serge E. Hallyn
     
  • This moves the mount namespace into the nsproxy. The mount namespace count
    now refers to the number of nsproxies that point to it, rather than the
    number of tasks. As a result, the unshare_namespace() function in
    kernel/fork.c no longer checks whether it is being shared.
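
    A rough sketch of the new counting rule: the namespace's count is taken
    and dropped with nsproxy lifetimes, not task lifetimes (helper names
    follow the mount-namespace code of that era, but treat this as
    illustrative):

        /* taken when an nsproxy is set up to point at ns ... */
        get_namespace(ns);              /* atomic_inc(&ns->count) */
        /* ... and dropped when that nsproxy is freed */
        put_namespace(ns);              /* frees ns on last drop  */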

    Signed-off-by: Serge Hallyn
    Cc: Kirill Korotaev
    Cc: "Eric W. Biederman"
    Cc: Herbert Poetzl
    Cc: Andrey Savochkin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Serge E. Hallyn
     
  • This patch adds a nsproxy structure to the task struct. Later patches will
    move the fs namespace pointer into this structure, and introduce a new utsname
    namespace into the nsproxy.

    The vserver and openvz functionality, then, would be implemented in large part
    by virtualizing/isolating more and more resources into namespaces, each
    contained in the nsproxy.
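
    A simplified sketch of the structure being introduced; the member set
    beyond the refcount is an assumption based on the description above:

        struct nsproxy {
                atomic_t count;                 /* shared, refcounted           */
                struct uts_namespace *uts_ns;   /* introduced by a later patch  */
                struct ipc_namespace *ipc_ns;   /* introduced by a later patch  */
                struct namespace *namespace;    /* fs namespace, moved in later */
        };

        /* and in struct task_struct: */
        struct nsproxy *nsproxy;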

    [akpm@osdl.org: build fix]
    Signed-off-by: Serge Hallyn
    Cc: Kirill Korotaev
    Cc: "Eric W. Biederman"
    Cc: Herbert Poetzl
    Cc: Andrey Savochkin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Serge E. Hallyn
     

01 Oct, 2006

1 commit

  • There were a few accounting data fields and macros that are used in CSA but
    were #ifdef'ed inside CONFIG_BSD_PROCESS_ACCT. This patch changes those
    ifdefs from CONFIG_BSD_PROCESS_ACCT to CONFIG_TASK_XACCT. A few defines are
    moved from kernel/acct.c and include/linux/acct.h to kernel/tsacct.c and
    include/linux/tsacct_kern.h.
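
    The shape of the change, loosely sketched (the field names are from the
    extended accounting code of that era; treat them as illustrative):

        /* in struct task_struct, formerly under CONFIG_BSD_PROCESS_ACCT: */
        #ifdef CONFIG_TASK_XACCT
                u64 acct_rss_mem1;      /* accumulated rss usage    */
                u64 acct_vm_mem1;       /* accumulated virtual size */
        #endif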

    Signed-off-by: Jay Lan
    Cc: Shailabh Nagar
    Cc: Balbir Singh
    Cc: Jes Sorensen
    Cc: Chris Sturtivant
    Cc: Tony Ernst
    Cc: Guillaume Thouvenin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jay Lan
     

30 Sep, 2006

2 commits

  • copy_process:

        // holds tasklist_lock + ->siglock
        /*
         * inherit ioprio
         */
        p->ioprio = current->ioprio;

    Why? ->ioprio was already copied in dup_task_struct(). I guess this is
    needed to ensure that the child can't escape
    sys_ioprio_set(IOPRIO_WHO_{PGRP,USER}), yes?

    In that case we don't need ->siglock held, and the comment should be
    updated.

    Signed-off-by: Oleg Nesterov
    Cc: Jens Axboe
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
     
  • Signed-off-by: Alexey Dobriyan
    Acked-by: Jens Axboe
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexey Dobriyan
     

29 Sep, 2006

1 commit

  • * 'devel' of master.kernel.org:/home/rmk/linux-2.6-arm: (130 commits)
    [ARM] 3856/1: Add clocksource for Intel IXP4xx platforms
    [ARM] 3855/1: Add generic time support
    [ARM] 3873/1: S3C24XX: Add irq_chip names
    [ARM] 3872/1: S3C24XX: Apply consistant tabbing to irq_chips
    [ARM] 3871/1: S3C24XX: Fix ordering of EINT4..23
    [ARM] nommu: confirms the CR_V bit in nommu mode
    [ARM] nommu: abort handler fixup for !CPU_CP15_MMU cores.
    [ARM] 3870/1: AT91: Start removing static memory mappings
    [ARM] 3869/1: AT91: NAND support for DK and KB9202 boards
    [ARM] 3868/1: AT91 hardware header update
    [ARM] 3867/1: AT91 GPIO update
    [ARM] 3866/1: AT91 clock update
    [ARM] 3865/1: AT91RM9200 header updates
    [ARM] 3862/2: S3C2410 - add basic power management support for AML M5900 series
    [ARM] kthread: switch arch/arm/kernel/apm.c
    [ARM] Off-by-one in arch/arm/common/icst*
    [ARM] 3864/1: Refactore sharpsl_pm
    [ARM] 3863/1: Add Locomo SPI Device
    [ARM] 3847/2: Convert LOMOMO to use struct device for GPIOs
    [ARM] Use CPU_CACHE_* where possible in asm/cacheflush.h
    ...

    Linus Torvalds
     

20 Sep, 2006

1 commit

  • ARM has interrupts enabled over context switches (iow, has
    __ARCH_WANT_INTERRUPTS_ON_CTXSW defined.) The lockdep code in fork.c
    assumes that interrupts are always disabled. Fix this wrong
    assumption by making the initialisation of 'p->hardirqs_enabled'
    depend on __ARCH_WANT_INTERRUPTS_ON_CTXSW.
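
    The fix amounts to something like the following in copy_process() (a
    sketch consistent with the description above):

        #ifdef __ARCH_WANT_INTERRUPTS_ON_CTXSW
                p->hardirqs_enabled = 1;        /* IRQs stay on across switches */
        #else
                p->hardirqs_enabled = 0;
        #endif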

    Acked-by: Ingo Molnar
    Signed-off-by: Russell King

    Russell King
     

02 Sep, 2006

1 commit

  • Cleanup allocation and freeing of tsk->delays used by delay accounting.
    This solves two problems reported for delay accounting:

    1. oops in __delayacct_blkio_ticks
    http://www.uwsg.indiana.edu/hypermail/linux/kernel/0608.2/1844.html

    Currently tsk->delays is freed too early in task exit, which can cause a
    NULL tsk->delays to be accessed via reading of /proc//stats.
    The patch fixes this problem by freeing tsk->delays closer to when
    task_struct itself is freed up. As a result, it also eliminates the use of
    tsk->delays_lock which was only being used (inadequately) to safeguard
    access to tsk->delays while a task was exiting.

    2. Possible memory leak in kernel/delayacct.c
    http://www.uwsg.indiana.edu/hypermail/linux/kernel/0608.2/1389.html

    The patch cleans up the tsk->delays allocation after a bad fork, which was
    missing earlier.

    The patch has been tested to fix the problems listed above and stress
    tested with rapid calls to delay accounting's taskstats command interface
    (which is the other path that can access the same data, besides the /proc
    interface causing the oops above).
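
    A sketch of the resulting ordering: tsk->delays now goes away with the
    task_struct itself rather than in the exit path (the body is abridged and
    illustrative):

        void free_task(struct task_struct *tsk)
        {
                free_thread_info(tsk->thread_info);
                delayacct_tsk_free(tsk);        /* moved here from task exit */
                free_task_struct(tsk);
        }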

    Signed-off-by: Shailabh Nagar
    Cc: Balbir Singh
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Shailabh Nagar
     

15 Jul, 2006

2 commits

  • Send per-tgid data only once during exit of a thread group instead of once
    with each member thread exit.

    Currently, when a thread exits, besides its per-tid data, the per-tgid data
    of its thread group is also sent out, if its thread group is non-empty.
    The per-tgid data sent consists of the sum of per-tid stats for all
    *remaining* threads of the thread group.

    This patch modifies this sending in two ways:

    - the per-tgid data is sent only when the last thread of a thread group
    exits. This cuts down heavily on the overhead of sending/receiving
    per-tgid data, especially when other exploiters of the taskstats
    interface aren't interested in per-tgid stats

    - the semantics of the per-tgid data sent are changed. Instead of being
    the sum of per-tid data for remaining threads, the value now sent is the
    true total accumulated statistics for all threads that are/were part of
    the thread group (sketched below).
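
    Sketched very loosely, the exit path now does something like this (names
    are illustrative; the real logic lives in kernel/taskstats.c):

        /* fold this thread's stats into the group total ... */
        fill_tgid_exit(tsk);
        /* ... but emit the per-tgid record only with the last thread */
        if (group_dead)
                send_tgid_stats(tsk);   /* hypothetical helper */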

    The patch also addresses a minor issue where the failure of one accounting
    subsystem to fill in the taskstats structure was preventing the taskstats
    message from being sent at all.

    The patch has been tested for stability and has run cerberus for over 4
    hours on an SMP system.

    [akpm@osdl.org: bugfixes]
    Signed-off-by: Shailabh Nagar
    Signed-off-by: Balbir Singh
    Cc: Jay Lan
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Shailabh Nagar
     
  • Initialization code related to the collection of per-task "delay"
    statistics, which measure how long a task had to wait for CPU, sync block
    I/O, swapping, etc. The
    collection of statistics and the interface are in other patches. This patch
    sets up the data structures and allows the statistics collection to be
    disabled through a kernel boot parameter.
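
    The boot-parameter hook is small; something like the usual __setup()
    pattern (a sketch of kernel/delayacct.c, from memory):

        int delayacct_on __read_mostly = 1;     /* enabled by default */

        static int __init delayacct_setup_disable(char *str)
        {
                delayacct_on = 0;
                return 1;
        }
        __setup("nodelayacct", delayacct_setup_disable);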

    Signed-off-by: Shailabh Nagar
    Signed-off-by: Balbir Singh
    Cc: Jes Sorensen
    Cc: Peter Chubb
    Cc: Erich Focht
    Cc: Levent Serinol
    Cc: Jay Lan
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Shailabh Nagar
     

04 Jul, 2006

5 commits

  • cleanup: remove task_t and convert all its uses to struct task_struct. I
    introduced it for the scheduler long ago and it was a mistake.

    Conversion was mostly scripted, the result was reviewed and all
    secondary whitespace and style impact (if any) was fixed up by hand.

    Signed-off-by: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ingo Molnar
     
  • Teach special (recursive) locking code to the lock validator. Has no effect
    on non-lockdep kernels.
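
    For example, code that legitimately nests two locks of the same class has
    to tell the validator about it; the annotation style this enables looks
    roughly like (a sketch, assuming the _nested API):

        /* take two inodes' mutexes in a fixed order; the second
           acquisition is marked as a distinct subclass so lockdep
           does not report it as a recursive deadlock */
        mutex_lock(&inode1->i_mutex);
        mutex_lock_nested(&inode2->i_mutex, SINGLE_DEPTH_NESTING);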

    Signed-off-by: Ingo Molnar
    Signed-off-by: Arjan van de Ven
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ingo Molnar
     
  • Do 'make oldconfig' and accept all the defaults for new config options,
    then reboot into the kernel. If everything goes well it should boot up
    fine, and you should have /proc/lockdep and /proc/lockdep_stats files.

    Typically, if the lock validator finds some problem it will print out
    voluminous debug output that begins with "BUG: ..."; that syslog output
    can be used by kernel developers to figure out the precise locking
    scenario.

    What does the lock validator do? It "observes" and maps all locking rules as
    they occur dynamically (as triggered by the kernel's natural use of spinlocks,
    rwlocks, mutexes and rwsems). Whenever the lock validator subsystem detects a
    new locking scenario, it validates this new rule against the existing set of
    rules. If this new rule is consistent with the existing set of rules then the
    new rule is added transparently and the kernel continues as normal. If the
    new rule could create a deadlock scenario then this condition is printed out.
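
    For instance, the classic AB-BA inversion is caught from two
    single-context observations alone, without the racy interleaving ever
    running (a minimal sketch):

        /* context 1 establishes the rule  a -> b */
        spin_lock(&a);
        spin_lock(&b);
        spin_unlock(&b);
        spin_unlock(&a);

        /* context 2 later establishes b -> a: lockdep reports a circular
           dependency here, even though no deadlock actually occurred */
        spin_lock(&b);
        spin_lock(&a);
        spin_unlock(&a);
        spin_unlock(&b);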

    When determining validity of locking, all possible "deadlock scenarios" are
    considered: assuming arbitrary number of CPUs, arbitrary irq context and task
    context constellations, running arbitrary combinations of all the existing
    locking scenarios. In a typical system this means millions of separate
    scenarios. This is why we call it a "locking correctness" validator - for all
    rules that are observed the lock validator proves it with mathematical
    certainty that a deadlock could not occur (assuming that the lock validator
    implementation itself is correct and its internal data structures are not
    corrupted by some other kernel subsystem). [see more details and conditionals
    of this statement in include/linux/lockdep.h and
    Documentation/lockdep-design.txt]

    Furthermore, this "all possible scenarios" property of the validator also
    enables finding complex, highly unlikely multi-CPU multi-context races via
    single-context rules, drastically increasing the likelihood of finding
    bugs. In practical terms: the lock validator already found a bug in the
    upstream kernel that could only occur on systems with 3 or more CPUs, and
    which needed 3 very unlikely code sequences to occur at once on those 3
    CPUs. That bug was found and reported on a single-CPU system (!). So in
    essence a race will be found "piecemeal-wise", triggering all the
    necessary components for the race, without having to reproduce the race
    scenario itself! In its short existence the lock validator found and
    reported many bugs before they actually caused a real deadlock.

    To further increase the efficiency of the validator, the mapping is not per
    "lock instance", but per "lock-class". For example, all struct inode objects
    in the kernel have inode->inotify_mutex. If there are 10,000 inodes cached,
    then there are 10,000 lock objects. But ->inotify_mutex is a single "lock
    type", and all locking activities that occur against ->inotify_mutex are
    "unified" into this single lock-class. The advantage of the lock-class
    approach is that all historical ->inotify_mutex uses are mapped into a single
    (and as narrow as possible) set of locking rules - regardless of how many
    different tasks or inode structures it took to build this set of rules. The
    set of rules persist during the lifetime of the kernel.

    To see the rough magnitude of checking that the lock validator does, here's a
    portion of /proc/lockdep_stats, fresh after bootup:

    lock-classes: 694 [max: 2048]
    direct dependencies: 1598 [max: 8192]
    indirect dependencies: 17896
    all direct dependencies: 16206
    dependency chains: 1910 [max: 8192]
    in-hardirq chains: 17
    in-softirq chains: 105
    in-process chains: 1065
    stack-trace entries: 38761 [max: 131072]
    combined max dependencies: 2033928
    hardirq-safe locks: 24
    hardirq-unsafe locks: 176
    softirq-safe locks: 53
    softirq-unsafe locks: 137
    irq-safe locks: 59
    irq-unsafe locks: 176

    The lock validator has observed 1598 actual single-thread locking patterns,
    and has validated all possible 2033928 distinct locking scenarios.

    More details about the design of the lock validator can be found in
    Documentation/lockdep-design.txt, which can also found at:

    http://redhat.com/~mingo/lockdep-patches/lockdep-design.txt

    [bunk@stusta.de: cleanups]
    Signed-off-by: Ingo Molnar
    Signed-off-by: Arjan van de Ven
    Signed-off-by: Adrian Bunk
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ingo Molnar
     
  • Accurate hard-IRQ-flags and softirq-flags state tracing.

    This allows us to attach extra functionality to IRQ flags on/off
    events (such as trace-on/off).
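
    Concretely, the irq-flags wrappers gain trace hooks roughly like this (a
    sketch of the include/linux/irqflags.h approach):

        #define local_irq_enable() \
                do { trace_hardirqs_on(); raw_local_irq_enable(); } while (0)

        #define local_irq_disable() \
                do { raw_local_irq_disable(); trace_hardirqs_off(); } while (0)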

    Signed-off-by: Ingo Molnar
    Signed-off-by: Arjan van de Ven
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ingo Molnar
     
  • Generic lock debugging:

    - generalized lock debugging framework. For example, a bug in one lock
    subsystem turns off debugging in all lock subsystems.

    - got rid of the caller address passing (__IP__/__IP_DECL__/etc.) from
    the mutex/rtmutex debugging code: it caused way too much prototype
    hackery, and lockdep will give the same information anyway.

    - ability to do silent tests

    - check lock freeing in vfree too.

    - more finegrained debugging options, to allow distributions to
    turn off more expensive debugging features.

    There's no separate 'held mutexes' list anymore - but there's a 'held locks'
    stack within lockdep, which unifies deadlock detection across all lock
    classes. (this is independent of the lockdep validation stuff - lockdep first
    checks whether we are holding a lock already)

    Here are the current debugging options:

    CONFIG_DEBUG_MUTEXES=y
    CONFIG_DEBUG_LOCK_ALLOC=y

    which do:

    config DEBUG_MUTEXES
    bool "Mutex debugging, basic checks"

    config DEBUG_LOCK_ALLOC
    bool "Detect incorrect freeing of live mutexes"

    Signed-off-by: Ingo Molnar
    Signed-off-by: Arjan van de Ven
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ingo Molnar
     

27 Jun, 2006

2 commits

  • After the previous patch, SIGNAL_GROUP_EXIT implies a pending SIGKILL, so
    we can remove this check from copy_process() because we already checked
    !signal_pending().

    Signed-off-by: Oleg Nesterov
    Cc: Roland McGrath
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
     
  • To keep the dcache from filling up with dead /proc entries, we flush them
    on process exit. However, over the years that code has gotten hairy, with
    a dentry_pointer and a lock in task_struct, and has been misdocumented as
    a correctness feature.

    I have rewritten this code to look and see if we have a corresponding entry in
    the dcache and if so flush it on process exit. This removes the extra fields
    in the task_struct and allows me to trivially handle the case of a
    /proc//task/ entry as well as the current /proc/ entries.

    Signed-off-by: Eric W. Biederman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Eric W. Biederman
     

26 Jun, 2006

1 commit

  • The pacct facility needs an I/O operation when an accounting record is
    generated, which opens the possibility of waking up the OOM killer. If
    the OOM killer is activated, it kills some processes to make them release
    their memory regions.

    But acct_process() is called in the killed process's context before
    exit_mm() is called, so those processes cannot release their own memory.
    As a result, such processes get stuck at this point, finally causing a
    system stall.

    KaiGai Kohei
     

20 Apr, 2006

2 commits

  • It's really task private, so clear that field on fork after copying the
    task structure.

    Signed-off-by: Jens Axboe

    Jens Axboe
     
  • While we can currently walk through thread groups, process groups, and
    sessions with just the rcu_read_lock, this opens the door to walking the
    entire task list.

    We already have all of the other RCU guarantees, so there is no cost in
    doing this; it should be enough for proc to stop taking the tasklist
    lock during readdir.

    prev_task was killed because it had no users, and using it would miss new
    tasks during an rcu traversal.
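
    What this buys callers, sketched:

        /* walk every task without taking tasklist_lock */
        struct task_struct *g, *p;

        rcu_read_lock();
        do_each_thread(g, p) {
                /* p may be exiting concurrently; anything
                   dereferenced here must be RCU-safe */
        } while_each_thread(g, p);
        rcu_read_unlock();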

    Signed-off-by: Eric W. Biederman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Eric W. Biederman
     

15 Apr, 2006

1 commit

  • Somehow in the midst of dotting i's and crossing t's during
    the merge up to rc1 we wound up keeping __put_task_struct_cb
    when it should have been killed as it no longer has any users.
    Sorry I probably should have caught this while it was
    still in the -mm tree.

    Having the old code there gets confusing when reading
    through the code and trying to understand what is
    happening.

    Signed-off-by: Eric W. Biederman
    Signed-off-by: Linus Torvalds

    Eric W. Biederman
     

01 Apr, 2006

3 commits

  • Wrong error path in dup_fd() - it should return NULL on error,
    not an address of already freed memory :/

    Triggered by OpenVZ stress test suite.

    What is interesting is that it was causing different oopses in RCU like
    below:
    Call Trace:
    [] rcu_do_batch+0x2c/0x80
    [] rcu_process_callbacks+0x3d/0x70
    [] tasklet_action+0x73/0xe0
    [] __do_softirq+0x10a/0x130
    [] do_softirq+0x4f/0x60
    =======================
    [] smp_apic_timer_interrupt+0x77/0x110
    [] apic_timer_interrupt+0x1c/0x24
    Code: Bad EIP value.
    Kernel panic - not syncing: Fatal exception in interrupt
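
    The fix itself is one line in spirit (a sketch; label names abridged):

        out_release:
                kmem_cache_free(files_cachep, newf);
        out:
                return NULL;    /* previously returned the already-freed newf */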

    Signed-Off-By: Pavel Emelianov
    Signed-Off-By: Dmitry Mishin
    Signed-Off-By: Kirill Korotaev
    Signed-Off-By: Linus Torvalds

    Kirill Korotaev
     
  • Simplifies the code, reduces the need for 4 pid hash tables, and makes the
    code more capable.

    In the discussions I had with Oleg it was felt that, to a large extent,
    the cleanup itself justified the work. With struct pid being dynamically
    allocated, we could create the hash table entry when the pid was allocated
    and free the hash table entry when the pid was freed, instead of playing
    with the hash lists whenever a process attached or detached from a pid.

    For myself, the fact that it gave what my previous task_ref patch gave for
    free, with simpler code, was a big win. The problem is that if you hold a
    reference to struct task_struct you lock in 10K of low memory. If you do
    that in a user-controllable way like /proc does, an unprivileged but
    hostile user-space application with typical resource limits of 1000 fds
    and 100 processes can trigger the OOM killer by consuming all of low
    memory with task structs, on a machine with 1GB of low memory.

    If I instead hold a reference to struct pid which holds a pointer to my
    task_struct, I don't suffer from that problem because struct pid is 2 orders
    of magnitude smaller. In fact struct pid is small enough that most other
    kernel data structures dwarf it, so simply limiting the number of referring
    data structures is enough to prevent exhaustion of low memory.

    This splits the current struct pid into two structures, struct pid and
    struct pid_link, and reduces our number of hash tables from PIDTYPE_MAX to
    just one. struct pid_link is the per-process linkage into the hash tables
    and lives in struct task_struct. struct pid is given an independent
    lifetime, and holds pointers to each of the pid types.

    The independent life of struct pid simplifies attach_pid and detach_pid,
    because we are always manipulating the list of pids and not the hash
    table. In addition, giving struct pid an independent life makes the
    concept much more powerful.

    Kernel data structures can now embed a struct pid * instead of a pid_t and
    not suffer from pid wrap around problems or from keeping unnecessarily
    large amounts of memory allocated.
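
    A simplified sketch of the two structures described above (fields taken
    from the description; the exact layout may differ):

        struct pid {
                atomic_t count;                         /* independent refcount */
                int nr;                                 /* the numeric pid      */
                struct hlist_node pid_chain;            /* the one hash table   */
                struct hlist_head tasks[PIDTYPE_MAX];   /* per-type task lists  */
        };

        struct pid_link {
                struct hlist_node node; /* lives in struct task_struct */
                struct pid *pid;
        };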

    Signed-off-by: Eric W. Biederman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Eric W. Biederman
     
  • This just got nuked in mainline. Bring it back because Eric's patches use it.

    Cc: "Eric W. Biederman"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
     

29 Mar, 2006

3 commits

  • Move 'tsk->sighand = NULL' from cleanup_sighand() to __exit_signal(). This
    makes the exit path more understandable and allows us to do
    cleanup_sighand() outside of the ->siglock-protected section.

    Signed-off-by: Oleg Nesterov
    Cc: "Eric W. Biederman"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
     
  • Eric W. Biederman wrote:
    >
    > Ok. SUSV3/Posix is clear, fork is atomic with respect
    > to signals. Either a signal comes before or after a
    > fork but not during. (See the rationale section).
    > http://www.opengroup.org/onlinepubs/000095399/functions/fork.html
    >
    > The tasklist_lock does not stop forks from adding to a process
    > group. The forks stall while the tasklist_lock is held, but a fork
    > that began before we grabbed the tasklist_lock simply completes
    > afterwards, and the child does not receive the signal.

    This also means that a SIGSTOP or sig_kernel_coredump() signal can't be
    delivered to a pgrp/session reliably.

    With this patch copy_process() returns -ERESTARTNOINTR when it detects a
    pending signal; fork() will be restarted transparently after the signals
    are handled.

    This patch also deletes the now-unneeded "group_stop_count > 0" check;
    copy_process() can no longer succeed while a group stop is in progress.
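
    The added check is, in sketch form:

        recalc_sigpending();
        if (signal_pending(current)) {
                retval = -ERESTARTNOINTR;       /* fork() restarts transparently */
                goto bad_fork_cleanup;          /* label abridged */
        }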

    Signed-off-by: Oleg Nesterov
    Acked-By: Eric Biederman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
     
  • This patch kills the PIDTYPE_TGID pid_type, thus saving one hash table in
    kernel/pid.c and speeding up subthread create/destroy a bit. It is also a
    preparation for the further tref/pids rework.

    This patch adds 'struct list_head thread_group' to 'struct task_struct'
    instead.

    We don't detach the group leader from the PIDTYPE_PID namespace until
    another thread inherits its ->pid == ->tgid, so we are safe wrt a
    premature free_pidmap(->tgid) call.

    Currently there are no users of find_task_by_pid_type(PIDTYPE_TGID).
    Should the need arise, we can use find_task_by_pid()->group_leader.
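
    With the list in place, walking a thread group needs no TGID hash at all;
    the pattern is roughly (a sketch):

        /* new member in struct task_struct: */
        struct list_head thread_group;

        /* iterate the group starting from any member g: */
        struct task_struct *t = g;
        do {
                /* visit thread t */
        } while ((t = next_thread(t)) != g);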

    Signed-off-by: Oleg Nesterov
    Acked-By: Eric Biederman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov