Sun Microsystems, Inc.  Oracle System Handbook - ISO 7.0 May 2018 Internal/Partner Edition

Asset ID: 1-71-1593112.1
Update Date: 2018-05-01

Solution Type: Technical Instruction

Solution 1593112.1: How to map an LDOM guest CPU to a Physical Board [SPARC T3-4/T4-4/T5-4 and T5-8]


Related Items
  • SPARC T3-4
  • SPARC T4-4
  • SPARC T5-4
  • SPARC T5-8
Related Categories
  • PLA-Support>Sun Systems>SPARC>CMT>SN-SPARC: T4




In this Document
Goal
Solution


Created from <SR 3-7845790504>

Applies to:

SPARC T5-4 - Version All Versions to All Versions [Release All Releases]
SPARC T3-4 - Version All Versions to All Versions [Release All Releases]
SPARC T4-4 - Version All Versions to All Versions [Release All Releases]
SPARC T5-8 - Version All Versions to All Versions [Release All Releases]
Oracle Solaris on SPARC (64-bit)

Goal

The purpose of this document is to assist engineers in mapping an LDOM guest VCPU to a physical Processor Module (PM).

Solution

To physically locate where an LDOM guest CPU thread is running, the VCPU must be mapped to a physical Core ID. The following sections briefly describe the concepts of Core ID, CPUSET, and VCPU.
     
Core ID (CID) - A unique identifier for each CPU core on the system. The total number of cores depends on the SPARC architecture: a SPARC T3 CPU has 16 (S2) cores, a SPARC T4 CPU has 8 (S3) cores, and a SPARC T5 CPU has 16 (S3) cores. The SPARC T3-4 system has a total of 64 cores [0-63], the SPARC T4-4 system has a total of 32 cores [0-31], the SPARC T5-4 system has a total of 64 cores [0-63], and the SPARC T5-8 system has a total of 128 cores [0-127].
    
CPUSET - Represents the CPU threads found on a CPU core. Both the S2 and S3 cores have 8 CPU threads each. A 4 CPU/64 (S2) core SPARC T3-4 and a 4 CPU/64 (S3) core SPARC T5-4 system have a total of 512 CPU threads, a 4 CPU/32 (S3) core T4-4 system has a total of 256 CPU threads, and an 8 CPU/128 (S3) core T5-8 has a total of 1024 threads on the system.
    
VCPU - When a CPUSET is assigned to a guest domain, the allocated resource is called a Virtual CPU. The total number of VCPUs assigned to a particular guest domain depends on how the guest domain was configured and on whether VCPU resources have been dynamically reconfigured (DR) in or out.
        
Depending on system type and configuration, the number of CPU cores and where they are physically located on the system may vary.
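Since both the S2 and S3 cores carry 8 threads each, a physical thread ID (the PID column in ldm output, which is also the value listed in a core's CPUSET) maps to its core by integer division. The sketch below is illustrative only (it is not an Oracle tool); the function name is an assumption for this example.

```shell
# Illustrative sketch: with 8 threads per core, a physical thread ID
# (the PID/CPUSET value in ldm output) identifies its core by
# integer division.
pid_to_cid() {
  echo $(( $1 / 8 ))
}

pid_to_cid 72    # threads 72-79 belong to CID 9
pid_to_cid 192   # thread 192 belongs to CID 24
```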

 


    
    Table A (Core ID Identification Table)

    System Type and Configuration    CPU Core ID (CID)    CPUSET
    SPARC T3-4 (FULL)
      Processor Module 0 (PM0)       CID 0 to 31          CPUSET 0 to 255
      Processor Module 1 (PM1)       CID 32 to 63         CPUSET 256 to 511
    SPARC T3-4 (HALF)
      Processor Module 0 (PM0)       CID 0 to 31          CPUSET 0 to 255
      Filler Module 1 (FM1)          EMPTY                EMPTY
    SPARC T4-4 (FULL)
      Processor Module 0 (PM0)       CID 0 to 15          CPUSET 0 to 127
      Processor Module 1 (PM1)       CID 16 to 31         CPUSET 128 to 255
    SPARC T4-4 (HALF)
      Processor Module 0 (PM0)       CID 0 to 15          CPUSET 0 to 127
      Filler Module 1 (FM1)          EMPTY                EMPTY
    SPARC T5-4 (FULL)
      Processor Module 0 (PM0)       CID 0 to 31          CPUSET 0 to 255
      Processor Module 1 (PM1)       CID 32 to 63         CPUSET 256 to 511
    SPARC T5-8 (FULL)
      Processor Module 0 (PM0)       CID 0 to 31          CPUSET 0 to 255
      Processor Module 1 (PM1)       CID 32 to 63         CPUSET 256 to 511
      Processor Module 2 (PM2)       CID 64 to 95         CPUSET 512 to 767
      Processor Module 3 (PM3)       CID 96 to 127        CPUSET 768 to 1023
    SPARC T5-8 (HALF) - SuperCluster configuration only
      Processor Module 0 (PM0)       CID 0 to 31          CPUSET 0 to 255
      Filler Module 1 (FM1)          EMPTY                EMPTY
      Filler Module 2 (FM2)          EMPTY                EMPTY
      Processor Module 3 (PM3)       CID 96 to 127        CPUSET 768 to 1023
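The CID ranges in Table A can also be expressed as simple arithmetic: each Processor Module holds two CPUs, so a T4-4 PM holds 16 (S3, 8-core) cores while a T3-4/T5-4/T5-8 PM holds 32 (16-core) cores. The sketch below is illustrative only, not an Oracle tool; the function name and system-type arguments are assumptions for this example.

```shell
# Illustrative sketch of Table A: given a system type and a Core ID,
# print the Processor Module the core sits on.
# Cores per PM: 16 on T4-4 (two 8-core CPUs), 32 on T3-4/T5-4/T5-8
# (two 16-core CPUs).
cid_to_pm() {
  sys=$1; cid=$2
  case $sys in
    t4-4)           per_pm=16 ;;
    t3-4|t5-4|t5-8) per_pm=32 ;;
    *) echo "unknown system type" >&2; return 1 ;;
  esac
  echo "PM$(( cid / per_pm ))"
}

cid_to_pm t4-4 24    # T4-4 PM1 holds CID 16 to 31
cid_to_pm t5-8 100   # T5-8 PM3 holds CID 96 to 127
```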

 

 
For the S3 core (T4 and T5) based systems, the threading mode of the CPUs may be configured for different workloads.
   
The default CPU threading mode is max-throughput, which allows the CPU to execute the highest possible number of concurrent hardware threads. If the CPU constraint is not specifically defined, the LDOM guest is assumed to be set for max-throughput. With the max-throughput setting, all 8 CPU threads of each core assigned to an LDOM guest are active.
    
SPARC T4 and T5 introduced a threading mode called max-ipc for CPU-intensive applications. The threading mode of a domain may be changed dynamically with the "ldm set-domain threading=<mode>" command (the "ldm add-core" and "ldm set-core" commands manage the core allocation itself). Only a single CPU thread is enabled per core for an LDOM guest configured with the max-ipc setting.
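The effect of the two modes on usable hardware strands can be sketched as below. This is an illustrative calculation only, not an ldm command; the function name is an assumption for this example.

```shell
# Illustrative sketch: hardware strands usable by a guest, per
# threading mode. Each S2/S3 core has 8 strands; max-ipc enables
# only one strand per core.
usable_strands() {
  mode=$1; cores=$2
  case $mode in
    max-throughput) echo $(( cores * 8 )) ;;
    max-ipc)        echo "$cores" ;;
  esac
}

usable_strands max-throughput 2   # 2 cores -> 16 active threads
usable_strands max-ipc 4          # 4 cores -> 4 active threads
```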
    
Example 1 

Note: LDOM guest with the default "max-throughput" setting
   
   
    # ldm ls-dom -o core -l guestdom
    NAME
    guestdom
    
    CORE
        CID    CPUSET
        9      (72, 73, 74, 75, 76, 77, 78, 79)
        10     (80, 81, 82, 83, 84, 85, 86, 87)
 
    # ldm ls-dom -o cpu -l guestdom    
    NAME
    guestdom
       
    VCPU
        VID    PID    CID    UTIL NORM STRAND
        0      72     9      5.1% 5.1%   100%
        1      73     9      0.0% 0.0%   100%
        2      74     9      0.0% 0.0%   100%
        3      75     9      0.0% 0.0%   100%
        4      76     9      0.0% 0.0%   100%
        5      77     9      0.0% 0.0%   100%
        6      78     9      0.0% 0.0%   100%
        7      79     9      0.0% 0.0%   100%
        8      80     10     0.0% 0.0%   100%
        9      81     10     0.0% 0.0%   100%
        10     82     10     0.0% 0.0%   100%
        11     83     10     0.0% 0.0%   100%
        12     84     10     0.0% 0.0%   100%
        13     85     10     0.0% 0.0%   100%
        14     86     10     0.0% 0.0%   100%
        15     87     10     0.0% 0.0%   100%
   
    # ldm ls-dom -o re guestdom    
    NAME
    guestdom

    CONSTRAINT
        cpu=whole-core
        max-cores=unlimited
        threading=max-throughput     <<<<<<<<<<<<<< if not defined, max-throughput is assumed
        physical-bindings=core,memory

    
Note: Mapping the example above against the Core ID Identification Table (Table A), the threads running on CID 9 and CID 10 are physically located on Processor Module 0 (PM0).
    
    
LDOM Guest prtdiag (max-throughput)
   
    System Configuration:  Oracle Corporation  sun4v SPARC T4-4
    Memory size: 65536 Megabytes
   
    ================================ Virtual CPUs     ================================
   
   
    CPU ID Frequency Implementation         Status
    ------ --------- ---------------------- -------
    0      2998 MHz  SPARC-T4               on-line 
    1      2998 MHz  SPARC-T4               on-line 
    2      2998 MHz  SPARC-T4               on-line 
    3      2998 MHz  SPARC-T4               on-line 
    4      2998 MHz  SPARC-T4               on-line 
    5      2998 MHz  SPARC-T4               on-line 
    6      2998 MHz  SPARC-T4               on-line 
    7      2998 MHz  SPARC-T4               on-line 
    8      2998 MHz  SPARC-T4               on-line 
    9      2998 MHz  SPARC-T4               on-line 
    10     2998 MHz  SPARC-T4               on-line 
    11     2998 MHz  SPARC-T4               on-line 
    12     2998 MHz  SPARC-T4               on-line 
    13     2998 MHz  SPARC-T4               on-line 
    14     2998 MHz  SPARC-T4               on-line 
    15     2998 MHz  SPARC-T4               on-line 
   
    ....
    
    
Note: You cannot identify the CID mappings from an LDOM guest 
    
    
      
Example 2

 
Note: LDOM guest with max-ipc setting
    

    # ldm list -l guestmaxipc
    NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL      NORM  UPTIME
    guestmaxipc      active     -n----  5000    32    64G      1.6%      1.6%  1h 28m
    

    # ldm ls-dom -o core guestmaxipc
    NAME
    guestmaxipc 
 
    CORE
        CID    CPUSET
        9      (72, 73, 74, 75, 76, 77, 78, 79)
        10     (80, 81, 82, 83, 84, 85, 86, 87)
        11     (88, 89, 90, 91, 92, 93, 94, 95)
        12     (96, 97, 98, 99, 100, 101, 102, 103)
       
    # ldm ls-dom -o cpu -l guestmaxipc
    NAME
    guestmaxipc

    VCPU
        VID    PID    CID    UTIL NORM STRAND
        0      72     9      5.2% 5.2%   100%      <<<<<<<<<<<<<<<<<<<<<     Single Thread Core ID 9
        1      73     9               0    0   100%
        2      74     9               0    0   100%
        3      75     9               0    0   100%
        4      76     9               0    0   100%
        5      77     9               0    0   100%
        6      78     9               0    0   100%
        7      79     9               0    0   100%
        8      80     10     0.0% 0.0%  100%     <<<<<<<<<<<<<<<<<<<<<     Single Thread Core ID 10
        9      81     10             0    0   100%
        10     82     10            0    0   100%
        11     83     10            0    0   100%
        12     84     10            0    0   100%
        13     85     10            0    0   100%
        14     86     10            0    0   100%
        15     87     10            0    0   100%
        16     88     11     0.0% 0.0% 100%     <<<<<<<<<<<<<<<<<<<<<     Single Thread Core ID 11
        17     89     11            0    0   100%
        18     90     11            0    0   100%
        19     91     11            0    0   100%
        20     92     11            0    0   100%
        21     93     11            0    0   100%
        22     94     11            0    0   100%
        23     95     11            0    0   100%
        24     96     12     0.3% 0.2% 100%     <<<<<<<<<<<<<<<<<<<<<     Single Thread Core ID 12
        25     97     12            0    0   100%
        26     98     12            0    0   100%
        27     99     12            0    0   100%
        28     100    12           0    0   100%
        29     101    12           0    0   100%
        30     102    12           0    0   100%
        31     103    12           0    0   100%
        
    
 

    # ldm ls-dom -o re guestmaxipc   
    NAME
    guestmaxipc

    
    CONSTRAINT
        cpu=whole-core         <<<<<<<<<<<<<<<<<<<<<< Required for max-ipc
        max-cores=4
        threading=max-ipc     <<<<<<<<<<<<<<<<<<<<<< Set to max-ipc
        physical-bindings=core,memory
    
    

Note: An LDOM guest domain has sequential virtual CPU IDs when threading is set to "max-throughput". When the threading option for the guest domain is set to max-ipc, only the first thread of each CPU core is listed in the prtdiag virtual CPU ID column.
   
   
    LDOM Guest prtdiag (max-ipc)
   
    System Configuration:  Oracle Corporation  sun4v SPARC T4-4
    Memory size: 65536 Megabytes
   
    ================================ Virtual CPUs     ================================
   
   
    CPU ID Frequency Implementation         Status
    ------ --------- ---------------------- -------
    0      2998 MHz  SPARC-T4               on-line 
    8      2998 MHz  SPARC-T4               on-line 
    16     2998 MHz  SPARC-T4               on-line 
    24     2998 MHz  SPARC-T4               on-line  
    
    

Note: Only 4 CPU threads are made available to the guest LDOM; each thread represents a physical CPU core. Based on example 2 (ldm ls-dom -o cpu -l guestmaxipc), we can physically map LDOM guest VCPU 0 to CID 9, LDOM guest VCPU 8 to CID 10, LDOM guest VCPU 16 to CID 11, and LDOM guest VCPU 24 to CID 12. Based on the Core ID Identification Table (Table A), CID 9, CID 10, CID 11, and CID 12 are physically located on Processor Module 0 (PM0).
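Under max-ipc the active virtual CPU IDs are multiples of 8 (VID 0, 8, 16, ...), and the Nth active VID corresponds to the Nth entry in the domain's CORE list. The sketch below is illustrative only, not an ldm command; the function name is an assumption for this example.

```shell
# Illustrative sketch: map an active max-ipc VID to its Core ID,
# given the CID list from 'ldm ls-dom -o core'. The Nth active VID
# (VID = N * 8) falls on the Nth core in the list.
vid_to_cid() {
  vid=$1; shift            # remaining args: the domain's CID list
  idx=$(( vid / 8 ))       # which core slot this VID falls in
  shift $idx
  echo "$1"
}

# Core list of Example 2: CID 9 10 11 12
vid_to_cid 16 9 10 11 12   # VID 16 -> CID 11
```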

 


Note: In the example below, the kernel team identified that LDOM guest VCPU 176 on a SPARC T4-4 was faulty and requested TSC SPARC to assist in locating the correct Processor Module. This example is only an additional illustration of how to map an LDOM guest VCPU to a physical board. Do not replace the Processor Module unless this is confirmed by kernel and TSC SPARC backline.
   
   

    Sep 14 18:30:17 app1-t4 inetd[1934]: [ID 317013 daemon.notice]     bgssd[13186] from XX.XX.XX.XX 53714
    Sep 14 18:45:20 app1-t4 inetd[1934]: [ID 317013 daemon.notice]     bgssd[17949] from XX.XX.XX.XX 33429
    Sep 14 19:28:46 app1-t4 unix: [ID 536548 kern.notice] CPUIDs:
    Sep 14 19:28:46 app1-t4 unix: [ID 152697 kern.notice]  0xb0
    Sep 14 19:28:46 app1-t4 unix: [ID 350512 kern.notice] panic: failed     to stop cpu176
    Sep 14 19:28:46 app1-t4 unix: [ID 836849 kern.notice]     #012#015panic[cpu32]/thread=300331d8dc0:
    Sep 14 19:28:46 app1-t4 unix: [ID 990398 kern.notice] xt_sync:     timeout
    Sep 14 19:28:46 app1-t4 unix: [ID 100000 kern.notice] #012
    Sep 14 19:28:46 app1-t4 genunix: [ID 723222 kern.notice]     000002a103644e30 unix:xt_sync+370 (10bd800, 16, 2a103644f48,     3336c425f6aec, 16, 1913e00)
    Sep 14 19:28:46 app1-t4 genunix: [ID 179002 kern.notice]   %l0-3:     00000000000000b0 0003336c425f1650 0003336e966b4ef6     0003336e966b4eec#012  %l4-7: 000002a103644ff8 00000000000000b9     0000000000000000 00000000010bd9a0
    Sep 14 19:28:46 app1-t4 genunix: [ID 723222 kern.notice]     000002a103645150 unix:sfmmu_hblks_list_purge+194 (2a1036456a8, 20,     1, 100000000, 2a103645290, 0)
    Sep 14 19:28:46 app1-t4 genunix: [ID 179002 kern.notice]   %l0-3:     0000000000000000 0000000000000001 fffffffffffffff8     0101010001010101#012  %l4-7: 000002a103645290 000002a103645248     0101010001010101 0101010101010101
    Sep 14 19:28:46 app1-t4 genunix: [ID 723222 kern.notice]     000002a1036452d0 unix:hat_unload_callback+788 (7fc00, 2a103645468,     0, 2a103645568, 0, 3000c20e480)
    Sep 14 19:28:46 app1-t4 genunix: [ID 179002 kern.notice]   %l0-3:     ffffffff7d802000 0000000000000000 0000000000000001     ffffffff7d8007ff#012  %l4-7: 0000000000000000 00000300db49e788     ffffffff7d802000 0000070044cb7100
    Sep 14 19:28:46 app1-t4 genunix: [ID 723222 kern.notice]     000002a1036456b0 genunix:segvn_unmap+290 (300cf374c30,     ffffffff7d800000, 2000, 0, 1, 3005ada1538)
    Sep 14 19:28:46 app1-t4 genunix: [ID 179002 kern.notice]   %l0-3:     00000000010b6400 0000000000000002 0000000000000000     ffffffff7e13e000#012  %l4-7: 0000000000002000 0000000000000002     00000600655e83c8 ffffffff7d802000
    Sep 14 19:28:46 app1-t4 genunix: [ID 723222 kern.notice]     000002a1036457a0 genunix:as_unmap+210 (600655e83c8, 600502f0618,     600655e83f8, ffffffff7d802000, 0, 193c0a0)
    Sep 14 19:28:46 app1-t4 genunix: [ID 179002 kern.notice]   %l0-3:     000000000193b9b8 0000000000000004 0000000000000001     0000000000000000#012  %l4-7: ffffffff7d800000 0000000000002000     0000000000002000 00000300cf374c30
    Sep 14 19:28:46 app1-t4 genunix: [ID 723222 kern.notice]     000002a103645880 shmsys:shm_detach+18 (30031b759a0, 600641901c0, 0,     600641901c0, 300306b31c0, 7016bc00)
    Sep 14 19:28:46 app1-t4 genunix: [ID 179002 kern.notice]   %l0-3:     00000600641901c0 0000000000000000 0000000000000000     0000000000000001#012  %l4-7: 0000000000000000 0000000000000001     0000000000000000 0000000000000000
    Sep 14 19:28:46 app1-t4 genunix: [ID 723222 kern.notice]     000002a103645930 shmsys:shmdt+84 (ffffffff7d800000, 3003328ac40,     30031b759a0, 0, ffffffff7d800000, 600641901c0)
    Sep 14 19:28:46 app1-t4 genunix: [ID 179002 kern.notice]   %l0-3:     000003003328ac40 0000000000000004 0000000000000000     00000300cf374c30#012  %l4-7: 0003336c425e0332 000000000000a532     ffffffff7d802000 00000000001d0bc8
    Sep 14 19:28:46 app1-t4 genunix: [ID 723222 kern.notice]     000002a103645a20 shmsys:shmsys+60 (2, ffffffff7d800000, 0, 0,     7bfbb400, 78)
    Sep 14 19:28:46 app1-t4 genunix: [ID 179002 kern.notice]   %l0-3:     0000000000000000 0000000000000000 00000000da340000     000000000000da34#012  %l4-7: 0000000000000001 0000000000000000     0000000000000000 000000007bfbb778
    Sep 14 19:28:46 app1-t4 unix: [ID 100000 kern.notice]
    Sep 14 19:28:46 app1-t4 genunix: [ID 672855 kern.notice] syncing     file systems...
    Sep 14 19:28:46 app1-t4 genunix: [ID 904073 kern.notice]  done
    Sep 14 19:28:46 app1-t4 genunix: [ID 111219 kern.notice] dumping to     /dev/zvol/dsk/rpool/dump, offset 65536, content: kernel
    Sep 14 19:28:46 app1-t4 genunix: [ID 100000 kern.notice]
    Sep 14 19:28:46 app1-t4 genunix: [ID 665016 kern.notice] #015100%     done: 646273 pages dumped,
    Sep 14 19:28:46 app1-t4 genunix: [ID 851671 kern.notice] dump     succeeded
    
    

Note: Based on the LDOM guest prtdiag output, the guest is configured with the "max-ipc" threading setting.
   
   
   
    System Configuration:  Oracle Corporation  sun4v SPARC T4-4
    Memory size: 65536 Megabytes
   
    ================================ Virtual CPUs     ================================
   
   
    CPU ID Frequency Implementation         Status
    ------ --------- ---------------------- -------
    0      2998 MHz  SPARC-T4               on-line 
    8      2998 MHz  SPARC-T4               on-line 
    16     2998 MHz  SPARC-T4               on-line 
    24     2998 MHz  SPARC-T4               on-line 
    32     2998 MHz  SPARC-T4               on-line 
    40     2998 MHz  SPARC-T4               on-line 
    48     2998 MHz  SPARC-T4               on-line 
    56     2998 MHz  SPARC-T4               on-line 
    64     2998 MHz  SPARC-T4               on-line 
    72     2998 MHz  SPARC-T4               on-line 
    80     2998 MHz  SPARC-T4               on-line 
    88     2998 MHz  SPARC-T4               on-line 
    96     2998 MHz  SPARC-T4               on-line 
    104    2998 MHz  SPARC-T4               on-line 
    112    2998 MHz  SPARC-T4               on-line 
    120    2998 MHz  SPARC-T4               on-line 
    128    2998 MHz  SPARC-T4               on-line 
    136    2998 MHz  SPARC-T4               on-line 
    144    2998 MHz  SPARC-T4               on-line 
    152    2998 MHz  SPARC-T4               on-line 
    160    2998 MHz  SPARC-T4               on-line 
    168    2998 MHz  SPARC-T4               on-line 
    176    2998 MHz  SPARC-T4               on-line      <<<<<<<<<<<<< CPU 176
    184    2998 MHz  SPARC-T4               on-line 
   
    ================================= IO Devices     =================================
    Slot +            Bus   Name +                                Model        Speed
    Status            Type      Path                                                
------------------------------------------------------------------------------
    
    

Note: The LDOM guest configuration may be collected with the "ldm list -l <guestdomain>" command or from <explorer directory>/ldom/ldm_ls-dom_-l.out
   
   
   
# ldm list -l app1
   
    NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL      UPTIME
    app1           active     -n----  5001    192   64G       88%  4h     35m
   
    SOFTSTATE
    Solaris running
   
    UUID
        9d70f081-feba-6c6f-f470-d1c79f3cxxxx
   
    MAC
        00:14:4f:fb:bx:xx
   
    HOSTID
        0x84fbbxxx
   
    CONTROL
        failure-policy=ignore
        extended-mapin-space=off
   
    DEPENDENCY
        master=
   
    CORE
        CID    CPUSET
        2      (16, 17, 18, 19, 20, 21, 22, 23)
        3      (24, 25, 26, 27, 28, 29, 30, 31)
        4      (32, 33, 34, 35, 36, 37, 38, 39)
        5      (40, 41, 42, 43, 44, 45, 46, 47)
        6      (48, 49, 50, 51, 52, 53, 54, 55)
        7      (56, 57, 58, 59, 60, 61, 62, 63)
        8      (64, 65, 66, 67, 68, 69, 70, 71)
        9      (72, 73, 74, 75, 76, 77, 78, 79)
        10     (80, 81, 82, 83, 84, 85, 86, 87)
        11     (88, 89, 90, 91, 92, 93, 94, 95)
        12     (96, 97, 98, 99, 100, 101, 102, 103)
        13     (104, 105, 106, 107, 108, 109, 110, 111)
        14     (112, 113, 114, 115, 116, 117, 118, 119)
        15     (120, 121, 122, 123, 124, 125, 126, 127)
        16     (128, 129, 130, 131, 132, 133, 134, 135)
        17     (136, 137, 138, 139, 140, 141, 142, 143)
        18     (144, 145, 146, 147, 148, 149, 150, 151)
        19     (152, 153, 154, 155, 156, 157, 158, 159)
        20     (160, 161, 162, 163, 164, 165, 166, 167)
        21     (168, 169, 170, 171, 172, 173, 174, 175)
        22     (176, 177, 178, 179, 180, 181, 182, 183)
        23     (184, 185, 186, 187, 188, 189, 190, 191)
        24     (192, 193, 194, 195, 196, 197, 198, 199)     <<<<<<<<<< CPUSET 192 is Core ID 24
        25     (200, 201, 202, 203, 204, 205, 206, 207)
   
    VCPU
        VID    PID    CID    UTIL STRAND
        0      16     2      0.7%   100%
        1      17     2         0   100%
        2      18     2         0   100%
        3      19     2         0   100%
        4      20     2         0   100%
        5      21     2         0   100%
        6      22     2         0   100%
        7      23     2         0   100%
        8      24     3      0.3%   100%
        9      25     3         0   100%
      
    < removed lines in between >
   
        174    190    23        0   100%
        175    191    23        0   100%
        176    192    24     0.3%   100%     <<<<<<<<<<<<<<<<<<      VCPU 176 is CPUSET(PID) 192 and Core ID 24
        177    193    24        0   100%
        178    194    24        0   100%
        179    195    24        0   100%
        180    196    24        0   100%
        181    197    24        0   100%
        182    198    24        0   100%
        183    199    24        0   100%
        184    200    25     0.2%   100%
        185    201    25        0   100%
        186    202    25        0   100%
        187    203    25        0   100%
        188    204    25        0   100%
        189    205    25        0   100%
        190    206    25        0   100%
        191    207    25        0   100%
   
    MEMORY
        RA               PA               SIZE           
        0x20000000       0x420000000      64G
   
    CONSTRAINT
        whole-core
        max-cores=24
        threading=max-ipc <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
   
    VARIABLES
        auto-boot?=false
        keyboard-layout=US-English
   
    < removed lines after >
    
    
 Note: Based on the example above, VCPU 176 is CPUSET (PID) 192 and Core ID 24. The Core ID Identification Table (Table A) confirms that Core ID 24 on a T4-4 is on Processor Module 1 (PM1).
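The full chain for the panic case can be checked with the same arithmetic used throughout this document. This is an illustrative sketch only, not an Oracle tool:

```shell
# Illustrative sketch: chain the two lookups for the panic example.
# Guest VCPU 176 maps to physical thread PID 192 (from the VCPU
# table), which identifies the core and, on a T4-4, the board.
pid=192
cid=$(( pid / 8 ))    # 8 threads per core  -> CID 24
pm=$(( cid / 16 ))    # T4-4: 16 cores per Processor Module -> PM1
echo "PID $pid -> CID $cid -> PM$pm"
```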

 


Attachments
This solution has no attachment
  Copyright © 2018 Oracle, Inc.  All rights reserved.