Commit d3189545ee69527e949769b89a4cbb331de97b4a

Authored by Stuart Hopkins
Committed by Florian Tobias Schandinat
1 parent def7660868

udlfb: Add module option to do without shadow framebuffer

By default, udlfb allocates a 2nd buffer to shadow what's across
the bus on the USB device.  It can operate without this shadow,
but then it cannot tell which pixels have changed, and must send all.

This saves host memory, but worsens the USB 2.0 bus bottleneck.

This option allows users in very low memory situations (e.g.
bifferboard) to optionally turn off this shadow framebuffer.

Signed-off-by: Stuart Hopkins <stuart@linux-depot.com>
Signed-off-by: Bernie Thompson <bernie@plugable.com>
Signed-off-by: Florian Tobias Schandinat <FlorianSchandinat@gmx.de>

Showing 2 changed files with 13 additions and 2 deletions

Documentation/fb/udlfb.txt

What is udlfb?
===============

This is a driver for DisplayLink USB 2.0 era graphics chips.

DisplayLink chips provide simple hline/blit operations with some compression,
pairing that with a hardware framebuffer (16MB) on the other end of the
USB wire. That hardware framebuffer is able to drive the VGA, DVI, or HDMI
monitor with no CPU involvement until a pixel has to change.

The CPU or other local resource does all the rendering; optionally compares the
result with a local shadow of the remote hardware framebuffer to identify
the minimal set of pixels that have changed; and compresses and sends those
pixels line-by-line via USB bulk transfers.

Because of the efficiency of bulk transfers and a protocol on top that
does not require any acks, the effect is very low latency, which can
support surprisingly high resolutions with good performance for
non-gaming and non-video applications.

Mode setting, EDID read, etc. are other bulk or control transfers. Mode
setting is very flexible, able to set nearly arbitrary modes from any timing.

Advantages of USB graphics in general:

 * Ability to add a nearly arbitrary number of displays to any USB 2.0
   capable system. On Linux, the number of displays is limited by the fbdev
   interface (FB_MAX is currently 32). Of course, all USB devices on the same
   host controller share the same 480 Mbps USB 2.0 interface.

Advantages of supporting DisplayLink chips with the kernel framebuffer interface:

 * The actual hardware functionality of DisplayLink chips matches nearly
   one-to-one with the fbdev interface, making the driver quite small and
   tight relative to the functionality it provides.
 * X servers and other applications can use the standard fbdev interface
   from user mode to talk to the device, without needing to know anything
   about USB or DisplayLink's protocol at all. A "displaylink" X driver
   and a slightly modified "fbdev" X driver are among those that already do.

Disadvantages:

 * Fbdev's mmap interface assumes a real hardware framebuffer is mapped.
   In the case of USB graphics, it is just an allocated (virtual) buffer.
   Writes need to be detected and encoded into USB bulk transfers by the CPU.
   Accurate damage/changed-area notifications work around this problem.
   In the future, hopefully fbdev will be enhanced with a small standard
   interface to allow mmap clients to report damage, for the benefit
   of virtual or remote framebuffers.
 * Fbdev does not arbitrate client ownership of the framebuffer well.
 * Fbcon assumes the first framebuffer it finds should be consumed for console.
 * It's not clear what the future of fbdev is, given the rise of KMS/DRM.

How to use it?
==============

Udlfb, when loaded as a module, will match against all USB 2.0 generation
DisplayLink chips (Alex and Ollie family). It will then attempt to read the EDID
of the monitor, and set the best common mode between the DisplayLink device
and the monitor's capabilities.

If the DisplayLink device initializes successfully, it will paint a "green
screen", which means that from a hardware and fbdev software perspective,
everything is good.

At that point, a /dev/fb? interface will be present for user-mode applications
to open and begin writing to the framebuffer of the DisplayLink device using
standard fbdev calls. Note that if mmap() is used, by default the user-mode
application must send down damage notifications to trigger repaints of the
changed regions. Alternatively, udlfb can be recompiled with experimental
defio support enabled, to support a page-fault-based detection mechanism
that can work without explicit notification.

The most common client of udlfb is xf86-video-displaylink or a modified
xf86-video-fbdev X server. These servers have no real DisplayLink-specific
code. They write to the standard framebuffer interface and rely on udlfb
to do its thing. The one extra feature they have is the ability to report
rectangles from the X DAMAGE protocol extension down to udlfb via udlfb's
damage interface (which will hopefully be standardized for all virtual
framebuffers that need damage info). These damage notifications allow
udlfb to efficiently process the changed pixels.

Module Options
==============

Special configuration for udlfb is usually unnecessary. There are a few
options, however.

From the command line, pass options to modprobe:
  modprobe udlfb fb_defio=1 console=1

Or, for a permanent option, create a file like /etc/modprobe.d/options with the text:
  options udlfb fb_defio=1 console=1

Accepted options:

fb_defio	Make use of the fb_defio (CONFIG_FB_DEFERRED_IO) kernel
		module to track changed areas of the framebuffer by page faults.
		Standard fbdev applications that use mmap but that do not
		report damage may be able to work with this enabled.
		Disabled by default because of overhead and other issues.

console		Allow fbcon to attach to udlfb-provided framebuffers. This
		is disabled by default because fbcon will aggressively consume
		the first framebuffer it finds, which isn't usually what the
		user wants in the case of USB displays.

+shadow		Allocate a 2nd framebuffer to shadow what's currently across
+		the USB bus in device memory. If any pixels are unchanged,
+		do not transmit. Spends host memory to save USB transfers.
+		Enabled by default. Only disable on very low memory systems.

Sysfs Attributes
================

Udlfb creates several files in /sys/class/graphics/fb?,
where ? is the sequential framebuffer id of the particular DisplayLink device.

edid			If a valid EDID blob is written to this file (typically
			by a udev rule), then udlfb will use this EDID as a
			backup in case reading the actual EDID of the monitor
			attached to the DisplayLink device fails. This is
			especially useful for fixed panels, etc. that cannot
			communicate their capabilities via EDID. Reading
			this file returns the current EDID of the attached
			monitor (or last backup value written). This is
			useful to get the EDID of the attached monitor,
			which can be passed to utilities like parse-edid.

metrics_bytes_rendered	32-bit count of pixel bytes rendered

metrics_bytes_identical	32-bit count of how many of those bytes were found to be
			unchanged, based on a shadow framebuffer check

metrics_bytes_sent	32-bit count of how many bytes were transferred over
			USB to communicate the resulting changed pixels to the
			hardware. Includes compression and protocol overhead

metrics_cpu_kcycles_used	32-bit count of CPU cycles used in processing the
			above pixels (in thousands of cycles).

metrics_reset		Write-only. Any write to this file resets all metrics
			above to zero. Note that the 32-bit counters above
			roll over very quickly. To get reliable results, design
			performance tests to start and finish in a very short
			period of time (one minute or less is safe).

--
Bernie Thompson <bernie@plugable.com>

drivers/video/udlfb.c
/*
 * udlfb.c -- Framebuffer driver for DisplayLink USB controller
 *
 * Copyright (C) 2009 Roberto De Ioris <roberto@unbit.it>
 * Copyright (C) 2009 Jaya Kumar <jayakumar.lkml@gmail.com>
 * Copyright (C) 2009 Bernie Thompson <bernie@plugable.com>
 *
 * This file is subject to the terms and conditions of the GNU General Public
 * License v2. See the file COPYING in the main directory of this archive for
 * more details.
 *
 * Layout is based on skeletonfb by James Simmons and Geert Uytterhoeven,
 * usb-skeleton by GregKH.
 *
 * Device-specific portions based on information from Displaylink, with work
 * from Florian Echtler, Henrik Bjerregaard Pedersen, and others.
 */

#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/usb.h>
#include <linux/uaccess.h>
#include <linux/mm.h>
#include <linux/fb.h>
#include <linux/vmalloc.h>
#include <linux/slab.h>
#include <linux/prefetch.h>
#include <linux/delay.h>
#include <video/udlfb.h>
#include "edid.h"

static struct fb_fix_screeninfo dlfb_fix = {
	.id =		"udlfb",
	.type =		FB_TYPE_PACKED_PIXELS,
	.visual =	FB_VISUAL_TRUECOLOR,
	.xpanstep =	0,
	.ypanstep =	0,
	.ywrapstep =	0,
	.accel =	FB_ACCEL_NONE,
};

static const u32 udlfb_info_flags = FBINFO_DEFAULT | FBINFO_READS_FAST |
		FBINFO_VIRTFB |
		FBINFO_HWACCEL_IMAGEBLIT | FBINFO_HWACCEL_FILLRECT |
		FBINFO_HWACCEL_COPYAREA | FBINFO_MISC_ALWAYS_SETPAR;

/*
 * There are many DisplayLink-based products, all with unique PIDs. We are able
 * to support all volume ones (circa 2009) with a single driver, so we match
 * globally on VID. TODO: Probe() needs to detect when we might be running
 * "future" chips, and bail on those, so a compatible driver can match.
 */
static struct usb_device_id id_table[] = {
	{.idVendor = 0x17e9, .match_flags = USB_DEVICE_ID_MATCH_VENDOR,},
	{},
};
MODULE_DEVICE_TABLE(usb, id_table);

/* module options */
static int console;	/* Optionally allow fbcon to consume first framebuffer */
static int fb_defio;	/* Optionally enable experimental fb_defio mmap support */
+static int shadow = 1;	/* Optionally disable shadow framebuffer */

/* dlfb keeps a list of urbs for efficient bulk transfers */
static void dlfb_urb_completion(struct urb *urb);
static struct urb *dlfb_get_urb(struct dlfb_data *dev);
static int dlfb_submit_urb(struct dlfb_data *dev, struct urb *urb, size_t len);
static int dlfb_alloc_urb_list(struct dlfb_data *dev, int count, size_t size);
static void dlfb_free_urb_list(struct dlfb_data *dev);

/*
 * All DisplayLink bulk operations start with 0xAF, followed by specific code
 * All operations are written to buffers which then later get sent to device
 */
static char *dlfb_set_register(char *buf, u8 reg, u8 val)
{
	*buf++ = 0xAF;
	*buf++ = 0x20;
	*buf++ = reg;
	*buf++ = val;
	return buf;
}

static char *dlfb_vidreg_lock(char *buf)
{
	return dlfb_set_register(buf, 0xFF, 0x00);
}

static char *dlfb_vidreg_unlock(char *buf)
{
	return dlfb_set_register(buf, 0xFF, 0xFF);
}

/*
 * Map FB_BLANK_* to DisplayLink register
 * DLReg FB_BLANK_*
 * ----- -----------------------------
 * 0x00  FB_BLANK_UNBLANK (0)
 * 0x01  FB_BLANK_NORMAL (1)
 * 0x03  FB_BLANK_VSYNC_SUSPEND (2)
 * 0x05  FB_BLANK_HSYNC_SUSPEND (3)
 * 0x07  FB_BLANK_POWERDOWN (4) Note: requires modeset to come back
 */
static char *dlfb_blanking(char *buf, int fb_blank)
{
	u8 reg;

	switch (fb_blank) {
	case FB_BLANK_POWERDOWN:
		reg = 0x07;
		break;
	case FB_BLANK_HSYNC_SUSPEND:
		reg = 0x05;
		break;
	case FB_BLANK_VSYNC_SUSPEND:
		reg = 0x03;
		break;
	case FB_BLANK_NORMAL:
		reg = 0x01;
		break;
	default:
		reg = 0x00;
	}

	buf = dlfb_set_register(buf, 0x1F, reg);

	return buf;
}

static char *dlfb_set_color_depth(char *buf, u8 selection)
{
	return dlfb_set_register(buf, 0x00, selection);
}

static char *dlfb_set_base16bpp(char *wrptr, u32 base)
{
	/* the base pointer is 24 bits wide, 0x20 is the hi byte. */
	wrptr = dlfb_set_register(wrptr, 0x20, base >> 16);
	wrptr = dlfb_set_register(wrptr, 0x21, base >> 8);
	return dlfb_set_register(wrptr, 0x22, base);
}

/*
 * DisplayLink HW has separate 16bpp and 8bpp framebuffers.
 * In 24bpp modes, the low 3:2:3 RGB bits go in the 8bpp framebuffer
 */
static char *dlfb_set_base8bpp(char *wrptr, u32 base)
{
	wrptr = dlfb_set_register(wrptr, 0x26, base >> 16);
	wrptr = dlfb_set_register(wrptr, 0x27, base >> 8);
	return dlfb_set_register(wrptr, 0x28, base);
}

static char *dlfb_set_register_16(char *wrptr, u8 reg, u16 value)
{
	wrptr = dlfb_set_register(wrptr, reg, value >> 8);
	return dlfb_set_register(wrptr, reg+1, value);
}

/*
 * This is kind of weird because the controller takes some
 * register values in a different byte order than other registers.
 */
static char *dlfb_set_register_16be(char *wrptr, u8 reg, u16 value)
{
	wrptr = dlfb_set_register(wrptr, reg, value);
	return dlfb_set_register(wrptr, reg+1, value >> 8);
}

/*
 * LFSR is linear feedback shift register. The reason we have this is
 * because the display controller needs to minimize the clock depth of
 * various counters used in the display path. So this code reverses the
 * provided value into the lfsr16 value by counting backwards to get
 * the value that needs to be set in the hardware comparator to get the
 * same actual count. This makes sense once you read above a couple of
 * times and think about it from a hardware perspective.
 */
static u16 dlfb_lfsr16(u16 actual_count)
{
	u32 lv = 0xFFFF; /* This is the lfsr value that the hw starts with */

	while (actual_count--) {
		lv =	 ((lv << 1) |
			(((lv >> 15) ^ (lv >> 4) ^ (lv >> 2) ^ (lv >> 1)) & 1))
			& 0xFFFF;
	}

	return (u16) lv;
}

/*
 * This does LFSR conversion on the value that is to be written.
 * See LFSR explanation above for more detail.
 */
static char *dlfb_set_register_lfsr16(char *wrptr, u8 reg, u16 value)
{
	return dlfb_set_register_16(wrptr, reg, dlfb_lfsr16(value));
}

/*
 * This takes a standard fbdev screeninfo struct and all of its monitor mode
 * details and converts them into the DisplayLink equivalent register commands.
 */
static char *dlfb_set_vid_cmds(char *wrptr, struct fb_var_screeninfo *var)
{
	u16 xds, yds;
	u16 xde, yde;
	u16 yec;

	/* x display start */
	xds = var->left_margin + var->hsync_len;
	wrptr = dlfb_set_register_lfsr16(wrptr, 0x01, xds);
	/* x display end */
	xde = xds + var->xres;
	wrptr = dlfb_set_register_lfsr16(wrptr, 0x03, xde);

	/* y display start */
	yds = var->upper_margin + var->vsync_len;
	wrptr = dlfb_set_register_lfsr16(wrptr, 0x05, yds);
	/* y display end */
	yde = yds + var->yres;
	wrptr = dlfb_set_register_lfsr16(wrptr, 0x07, yde);

	/* x end count is active + blanking - 1 */
	wrptr = dlfb_set_register_lfsr16(wrptr, 0x09,
			xde + var->right_margin - 1);

	/* libdlo hardcodes hsync start to 1 */
	wrptr = dlfb_set_register_lfsr16(wrptr, 0x0B, 1);

	/* hsync end is width of sync pulse + 1 */
	wrptr = dlfb_set_register_lfsr16(wrptr, 0x0D, var->hsync_len + 1);

	/* hpixels is active pixels */
	wrptr = dlfb_set_register_16(wrptr, 0x0F, var->xres);

	/* yendcount is vertical active + vertical blanking */
	yec = var->yres + var->upper_margin + var->lower_margin +
			var->vsync_len;
	wrptr = dlfb_set_register_lfsr16(wrptr, 0x11, yec);

	/* libdlo hardcodes vsync start to 0 */
	wrptr = dlfb_set_register_lfsr16(wrptr, 0x13, 0);

	/* vsync end is width of vsync pulse */
	wrptr = dlfb_set_register_lfsr16(wrptr, 0x15, var->vsync_len);

	/* vpixels is active pixels */
	wrptr = dlfb_set_register_16(wrptr, 0x17, var->yres);

	/* pixclock is in ps; convert to a 5 kHz multiple: pclk5k = 1E12 / pixclock / 5000 */
	wrptr = dlfb_set_register_16be(wrptr, 0x1B,
			200*1000*1000/var->pixclock);

	return wrptr;
}

/*
 * This takes a standard fbdev screeninfo struct that was fetched or prepared
 * and then generates the appropriate command sequence that then drives the
 * display controller.
 */
static int dlfb_set_video_mode(struct dlfb_data *dev,
				struct fb_var_screeninfo *var)
{
	char *buf;
	char *wrptr;
	int retval = 0;
	int writesize;
	struct urb *urb;

	if (!atomic_read(&dev->usb_active))
		return -EPERM;

	urb = dlfb_get_urb(dev);
	if (!urb)
		return -ENOMEM;

	buf = (char *) urb->transfer_buffer;

	/*
	 * This first section has to do with setting the base address on the
	 * controller associated with the display. There are 2 base
	 * pointers; currently, we only use the 16 bpp segment.
	 */
	wrptr = dlfb_vidreg_lock(buf);
	wrptr = dlfb_set_color_depth(wrptr, 0x00);
	/* set base for 16bpp segment to 0 */
	wrptr = dlfb_set_base16bpp(wrptr, 0);
	/* set base for 8bpp segment to end of fb */
	wrptr = dlfb_set_base8bpp(wrptr, dev->info->fix.smem_len);

	wrptr = dlfb_set_vid_cmds(wrptr, var);
	wrptr = dlfb_blanking(wrptr, FB_BLANK_UNBLANK);
	wrptr = dlfb_vidreg_unlock(wrptr);
299 300
300 writesize = wrptr - buf; 301 writesize = wrptr - buf;
301 302
302 retval = dlfb_submit_urb(dev, urb, writesize); 303 retval = dlfb_submit_urb(dev, urb, writesize);
303 304
304 dev->blank_mode = FB_BLANK_UNBLANK; 305 dev->blank_mode = FB_BLANK_UNBLANK;
305 306
306 return retval; 307 return retval;
307 } 308 }

static int dlfb_ops_mmap(struct fb_info *info, struct vm_area_struct *vma)
{
	unsigned long start = vma->vm_start;
	unsigned long size = vma->vm_end - vma->vm_start;
	unsigned long offset = vma->vm_pgoff << PAGE_SHIFT;
	unsigned long page, pos;

	if (offset + size > info->fix.smem_len)
		return -EINVAL;

	pos = (unsigned long)info->fix.smem_start + offset;

	pr_notice("mmap() framebuffer addr:%lu size:%lu\n",
		  pos, size);

	while (size > 0) {
		page = vmalloc_to_pfn((void *)pos);
		if (remap_pfn_range(vma, start, page, PAGE_SIZE, PAGE_SHARED))
			return -EAGAIN;

		start += PAGE_SIZE;
		pos += PAGE_SIZE;
		if (size > PAGE_SIZE)
			size -= PAGE_SIZE;
		else
			size = 0;
	}

	vma->vm_flags |= VM_RESERVED;	/* prevent this VMA from being swapped out */
	return 0;
}

/*
 * Trims identical data from front and back of line.
 * Sets new front buffer address and width,
 * and returns byte count of identical pixels.
 * Assumes CPU natural alignment (unsigned long)
 * for back and front buffer ptrs and width.
 */
static int dlfb_trim_hline(const u8 *bback, const u8 **bfront, int *width_bytes)
{
	int j, k;
	const unsigned long *back = (const unsigned long *) bback;
	const unsigned long *front = (const unsigned long *) *bfront;
	const int width = *width_bytes / sizeof(unsigned long);
	int identical = width;
	int start = width;
	int end = width;

	prefetch((void *) front);
	prefetch((void *) back);

	for (j = 0; j < width; j++) {
		if (back[j] != front[j]) {
			start = j;
			break;
		}
	}

	for (k = width - 1; k > j; k--) {
		if (back[k] != front[k]) {
			end = k+1;
			break;
		}
	}

	identical = start + (width - end);
	*bfront = (u8 *) &front[start];
	*width_bytes = (end - start) * sizeof(unsigned long);

	return identical * sizeof(unsigned long);
}
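The trimming step is easy to exercise outside the kernel. This is a user-space re-implementation of the same logic (prefetch hints dropped, `u8` replaced with `unsigned char`), not the driver function itself:

```c
#include <assert.h>
#include <stddef.h>

/* User-space copy of dlfb_trim_hline's logic, minus the prefetch hints. */
static int trim_hline(const unsigned char *bback, const unsigned char **bfront,
		      int *width_bytes)
{
	int j, k;
	const unsigned long *back = (const unsigned long *) bback;
	const unsigned long *front = (const unsigned long *) *bfront;
	const int width = *width_bytes / sizeof(unsigned long);
	int start = width;
	int end = width;

	/* scan forward for first differing word */
	for (j = 0; j < width; j++) {
		if (back[j] != front[j]) {
			start = j;
			break;
		}
	}

	/* scan backward for last differing word */
	for (k = width - 1; k > j; k--) {
		if (back[k] != front[k]) {
			end = k + 1;
			break;
		}
	}

	*bfront = (const unsigned char *) &front[start];
	*width_bytes = (end - start) * sizeof(unsigned long);
	return (start + (width - end)) * sizeof(unsigned long);
}
```

With a 4-word line differing only at words 1 and 2, one identical word is trimmed from each end: the front pointer advances one word and the width shrinks to two words.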

/*
 * Render a command stream for an encoded horizontal line segment of pixels.
 *
 * A command buffer holds several commands.
 * It always begins with a fresh command header
 * (the protocol doesn't require this, but we enforce it to allow
 * multiple buffers to be potentially encoded and sent in parallel).
 * A single command encodes one contiguous horizontal line of pixels.
 *
 * The function relies on the client to do all allocation, so that
 * rendering can be done directly to output buffers (e.g. USB URBs).
 * The function fills the supplied command buffer, providing information
 * on where it left off, so the client may call in again with additional
 * buffers if the line will take several buffers to complete.
 *
 * A single command can transmit a maximum of 256 pixels,
 * regardless of the compression ratio (protocol design limit).
 * To the hardware, 0 for a size byte means 256.
 *
 * Rather than 256 pixel commands which are either rl or raw encoded,
 * the rlx command simply assumes alternating raw and rl spans within one cmd.
 * This has a slightly larger header overhead, but produces more even results.
 * It also processes all data (read and write) in a single pass.
 * Performance benchmarks of common cases show it having just slightly better
 * compression than 256 pixel raw or rle commands, with similar CPU consumption.
 * But for very rl-friendly data, it will not compress quite as well.
 */
static void dlfb_compress_hline(
	const uint16_t **pixel_start_ptr,
	const uint16_t *const pixel_end,
	uint32_t *device_address_ptr,
	uint8_t **command_buffer_ptr,
	const uint8_t *const cmd_buffer_end)
{
	const uint16_t *pixel = *pixel_start_ptr;
	uint32_t dev_addr = *device_address_ptr;
	uint8_t *cmd = *command_buffer_ptr;
	const int bpp = 2;

	while ((pixel_end > pixel) &&
	       (cmd_buffer_end - MIN_RLX_CMD_BYTES > cmd)) {
		uint8_t *raw_pixels_count_byte = 0;
		uint8_t *cmd_pixels_count_byte = 0;
		const uint16_t *raw_pixel_start = 0;
		const uint16_t *cmd_pixel_start, *cmd_pixel_end = 0;

		prefetchw((void *) cmd); /* pull in one cache line at least */

		*cmd++ = 0xAF;
		*cmd++ = 0x6B;
		*cmd++ = (uint8_t) ((dev_addr >> 16) & 0xFF);
		*cmd++ = (uint8_t) ((dev_addr >> 8) & 0xFF);
		*cmd++ = (uint8_t) ((dev_addr) & 0xFF);

		cmd_pixels_count_byte = cmd++; /* we'll know this later */
		cmd_pixel_start = pixel;

		raw_pixels_count_byte = cmd++; /* we'll know this later */
		raw_pixel_start = pixel;

		cmd_pixel_end = pixel + min(MAX_CMD_PIXELS + 1,
			min((int)(pixel_end - pixel),
			    (int)(cmd_buffer_end - cmd) / bpp));

		prefetch_range((void *) pixel, (cmd_pixel_end - pixel) * bpp);

		while (pixel < cmd_pixel_end) {
			const uint16_t * const repeating_pixel = pixel;

			*(uint16_t *)cmd = cpu_to_be16p(pixel);
			cmd += 2;
			pixel++;

			if (unlikely((pixel < cmd_pixel_end) &&
				     (*pixel == *repeating_pixel))) {
				/* go back and fill in raw pixel count */
				*raw_pixels_count_byte = ((repeating_pixel -
						raw_pixel_start) + 1) & 0xFF;

				while ((pixel < cmd_pixel_end)
				       && (*pixel == *repeating_pixel)) {
					pixel++;
				}

				/* immediately after raw data is repeat byte */
				*cmd++ = ((pixel - repeating_pixel) - 1) & 0xFF;

				/* Then start another raw pixel span */
				raw_pixel_start = pixel;
				raw_pixels_count_byte = cmd++;
			}
		}

		if (pixel > raw_pixel_start) {
			/* finalize last RAW span */
			*raw_pixels_count_byte = (pixel-raw_pixel_start) & 0xFF;
		}

		*cmd_pixels_count_byte = (pixel - cmd_pixel_start) & 0xFF;
		dev_addr += (pixel - cmd_pixel_start) * bpp;
	}

	if (cmd_buffer_end <= MIN_RLX_CMD_BYTES + cmd) {
		/* Fill leftover bytes with no-ops */
		if (cmd_buffer_end > cmd)
			memset(cmd, 0xAF, cmd_buffer_end - cmd);
		cmd = (uint8_t *) cmd_buffer_end;
	}

	*command_buffer_ptr = cmd;
	*pixel_start_ptr = pixel;
	*device_address_ptr = dev_addr;

	return;
}
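The rlx wire format buried in the loop above is easiest to see on its own: every command opens with `0xAF 0x6B`, a 24-bit big-endian device address, and the total pixel count (where 0 means 256 to the hardware). A sketch of just that header, with an invented helper name:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch (helper name invented) of the 6-byte rlx command header that
 * dlfb_compress_hline emits: escape byte, opcode, 24-bit big-endian
 * device address, then the command's total pixel count (0 means 256).
 * In the driver the count byte is reserved first and filled in last.
 */
static uint8_t *emit_rlx_header(uint8_t *cmd, uint32_t dev_addr,
				uint8_t total_pixels)
{
	*cmd++ = 0xAF;				/* command escape */
	*cmd++ = 0x6B;				/* rlx opcode */
	*cmd++ = (uint8_t)((dev_addr >> 16) & 0xFF);
	*cmd++ = (uint8_t)((dev_addr >> 8) & 0xFF);
	*cmd++ = (uint8_t)(dev_addr & 0xFF);
	*cmd++ = total_pixels;
	return cmd;
}
```

After the header come alternating raw-count/raw-pixels and repeat-count spans, which is where the "rlx" name comes from.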

/*
 * There are 3 copies of every pixel: The front buffer that the fbdev
 * client renders to, the actual framebuffer across the USB bus in hardware
 * (that we can only write to, slowly, and can never read), and (optionally)
 * our shadow copy that tracks what's been sent to that hardware buffer.
 */
static int dlfb_render_hline(struct dlfb_data *dev, struct urb **urb_ptr,
			      const char *front, char **urb_buf_ptr,
			      u32 byte_offset, u32 byte_width,
			      int *ident_ptr, int *sent_ptr)
{
	const u8 *line_start, *line_end, *next_pixel;
	u32 dev_addr = dev->base16 + byte_offset;
	struct urb *urb = *urb_ptr;
	u8 *cmd = *urb_buf_ptr;
	u8 *cmd_end = (u8 *) urb->transfer_buffer + urb->transfer_buffer_length;

	line_start = (u8 *) (front + byte_offset);
	next_pixel = line_start;
	line_end = next_pixel + byte_width;

	if (dev->backing_buffer) {
		int offset;
		const u8 *back_start = (u8 *) (dev->backing_buffer
						+ byte_offset);

		*ident_ptr += dlfb_trim_hline(back_start, &next_pixel,
					      &byte_width);

		offset = next_pixel - line_start;
		line_end = next_pixel + byte_width;
		dev_addr += offset;
		back_start += offset;
		line_start += offset;

		memcpy((char *)back_start, (char *) line_start,
		       byte_width);
	}

	while (next_pixel < line_end) {

		dlfb_compress_hline((const uint16_t **) &next_pixel,
			(const uint16_t *) line_end, &dev_addr,
			(u8 **) &cmd, (u8 *) cmd_end);

		if (cmd >= cmd_end) {
			int len = cmd - (u8 *) urb->transfer_buffer;
			if (dlfb_submit_urb(dev, urb, len))
				return 1; /* lost_pixels is set */
			*sent_ptr += len;
			urb = dlfb_get_urb(dev);
			if (!urb)
				return 1; /* lost_pixels is set */
			*urb_ptr = urb;
			cmd = urb->transfer_buffer;
			cmd_end = &cmd[urb->transfer_buffer_length];
		}
	}

	*urb_buf_ptr = cmd;

	return 0;
}

int dlfb_handle_damage(struct dlfb_data *dev, int x, int y,
		       int width, int height, char *data)
{
	int i, ret;
	char *cmd;
	cycles_t start_cycles, end_cycles;
	int bytes_sent = 0;
	int bytes_identical = 0;
	struct urb *urb;
	int aligned_x;

	start_cycles = get_cycles();

	aligned_x = DL_ALIGN_DOWN(x, sizeof(unsigned long));
	width = DL_ALIGN_UP(width + (x-aligned_x), sizeof(unsigned long));
	x = aligned_x;

	if ((width <= 0) ||
	    (x + width > dev->info->var.xres) ||
	    (y + height > dev->info->var.yres))
		return -EINVAL;

	if (!atomic_read(&dev->usb_active))
		return 0;

	urb = dlfb_get_urb(dev);
	if (!urb)
		return 0;
	cmd = urb->transfer_buffer;

	for (i = y; i < y + height ; i++) {
		const int line_offset = dev->info->fix.line_length * i;
		const int byte_offset = line_offset + (x * BPP);

		if (dlfb_render_hline(dev, &urb,
				      (char *) dev->info->fix.smem_start,
				      &cmd, byte_offset, width * BPP,
				      &bytes_identical, &bytes_sent))
			goto error;
	}

	if (cmd > (char *) urb->transfer_buffer) {
		/* Send partial buffer remaining before exiting */
		int len = cmd - (char *) urb->transfer_buffer;
		ret = dlfb_submit_urb(dev, urb, len);
		bytes_sent += len;
	} else
		dlfb_urb_completion(urb);

error:
	atomic_add(bytes_sent, &dev->bytes_sent);
	atomic_add(bytes_identical, &dev->bytes_identical);
	atomic_add(width*height*2, &dev->bytes_rendered);
	end_cycles = get_cycles();
	atomic_add(((unsigned int) ((end_cycles - start_cycles)
		    >> 10)), /* Kcycles */
		   &dev->cpu_kcycles_used);

	return 0;
}
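The first thing dlfb_handle_damage does is widen the damage rectangle so its start and width are unsigned-long aligned, which keeps dlfb_trim_hline's whole-word compares legal. A sketch of that widening, assuming DL_ALIGN_DOWN/DL_ALIGN_UP are the usual power-of-two alignment macros (the definitions below are assumptions, not copied from the driver):

```c
#include <assert.h>

/* Assumed power-of-two definitions behind the driver's DL_ALIGN_* macros. */
#define MY_ALIGN_DOWN(x, a)	((x) & ~((a) - 1))
#define MY_ALIGN_UP(x, a)	MY_ALIGN_DOWN((x) + (a) - 1, (a))

/* Widen an x/width pair the way dlfb_handle_damage does before rendering. */
static void align_damage(int *x, int *width, int a)
{
	int aligned_x = MY_ALIGN_DOWN(*x, a);

	/* width grows by however far x moved left, then rounds up */
	*width = MY_ALIGN_UP(*width + (*x - aligned_x), a);
	*x = aligned_x;
}
```

For example, with 8-byte alignment a rectangle starting at x=13 with width 10 becomes x=8, width=16: the aligned region always covers the original one.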

/*
 * Path triggered by usermode clients who write to the filesystem,
 * e.g. cat filename > /dev/fb1
 * Not used by X Windows or text-mode console. But useful for testing.
 * Slow because of the extra copy, and we must assume all pixels are dirty.
 */
static ssize_t dlfb_ops_write(struct fb_info *info, const char __user *buf,
			      size_t count, loff_t *ppos)
{
	ssize_t result;
	struct dlfb_data *dev = info->par;
	u32 offset = (u32) *ppos;

	result = fb_sys_write(info, buf, count, ppos);

	if (result > 0) {
		int start = max((int)(offset / info->fix.line_length) - 1, 0);
		int lines = min((u32)((result / info->fix.line_length) + 1),
				(u32)info->var.yres);

		dlfb_handle_damage(dev, 0, start, info->var.xres,
				   lines, info->screen_base);
	}

	return result;
}

/* hardware has native COPY command (see libdlo), but not worth it for fbcon */
static void dlfb_ops_copyarea(struct fb_info *info,
			      const struct fb_copyarea *area)
{
	struct dlfb_data *dev = info->par;

	sys_copyarea(info, area);

	dlfb_handle_damage(dev, area->dx, area->dy,
			   area->width, area->height, info->screen_base);
}

static void dlfb_ops_imageblit(struct fb_info *info,
			       const struct fb_image *image)
{
	struct dlfb_data *dev = info->par;

	sys_imageblit(info, image);

	dlfb_handle_damage(dev, image->dx, image->dy,
			   image->width, image->height, info->screen_base);
}

static void dlfb_ops_fillrect(struct fb_info *info,
			      const struct fb_fillrect *rect)
{
	struct dlfb_data *dev = info->par;

	sys_fillrect(info, rect);

	dlfb_handle_damage(dev, rect->dx, rect->dy, rect->width,
			   rect->height, info->screen_base);
}

/*
 * NOTE: fb_defio.c is holding info->fbdefio.mutex.
 * Touching ANY framebuffer memory that triggers a page fault
 * in fb_defio will cause a deadlock when it also tries to
 * grab the same mutex.
 */
static void dlfb_dpy_deferred_io(struct fb_info *info,
				 struct list_head *pagelist)
{
	struct page *cur;
	struct fb_deferred_io *fbdefio = info->fbdefio;
	struct dlfb_data *dev = info->par;
	struct urb *urb;
	char *cmd;
	cycles_t start_cycles, end_cycles;
	int bytes_sent = 0;
	int bytes_identical = 0;
	int bytes_rendered = 0;

	if (!fb_defio)
		return;

	if (!atomic_read(&dev->usb_active))
		return;

	start_cycles = get_cycles();

	urb = dlfb_get_urb(dev);
	if (!urb)
		return;

	cmd = urb->transfer_buffer;

	/* walk the written page list and render each to device */
	list_for_each_entry(cur, &fbdefio->pagelist, lru) {

		if (dlfb_render_hline(dev, &urb, (char *) info->fix.smem_start,
				      &cmd, cur->index << PAGE_SHIFT,
				      PAGE_SIZE, &bytes_identical, &bytes_sent))
			goto error;
		bytes_rendered += PAGE_SIZE;
	}

	if (cmd > (char *) urb->transfer_buffer) {
		/* Send partial buffer remaining before exiting */
		int len = cmd - (char *) urb->transfer_buffer;
		dlfb_submit_urb(dev, urb, len);
		bytes_sent += len;
	} else
		dlfb_urb_completion(urb);

error:
	atomic_add(bytes_sent, &dev->bytes_sent);
	atomic_add(bytes_identical, &dev->bytes_identical);
	atomic_add(bytes_rendered, &dev->bytes_rendered);
	end_cycles = get_cycles();
	atomic_add(((unsigned int) ((end_cycles - start_cycles)
		    >> 10)), /* Kcycles */
		   &dev->cpu_kcycles_used);
}

static int dlfb_get_edid(struct dlfb_data *dev, char *edid, int len)
{
	int i;
	int ret;
	char *rbuf;

	rbuf = kmalloc(2, GFP_KERNEL);
	if (!rbuf)
		return 0;

	for (i = 0; i < len; i++) {
		ret = usb_control_msg(dev->udev,
				      usb_rcvctrlpipe(dev->udev, 0), (0x02),
				      (0x80 | (0x02 << 5)), i << 8, 0xA1,
				      rbuf, 2, HZ);
		if (ret < 1) {
			pr_err("Read EDID byte %d failed err %x\n", i, ret);
			i--;
			break;
		}
		edid[i] = rbuf[1];
	}

	kfree(rbuf);

	return i;
}
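The loop above fetches EDID one byte per control transfer. Once fetched, a 128-byte EDID base block is easy to sanity-check, since the standard EDID 1.x format starts with a fixed 8-byte header and all 128 bytes sum to 0 mod 256. A host-side sketch (not part of the driver):

```c
#include <stdint.h>
#include <string.h>

/*
 * Minimal validity check for a 128-byte EDID base block, per the
 * standard EDID 1.x layout: fixed 8-byte header, and the byte at
 * offset 127 is a checksum making the whole block sum to 0 mod 256.
 */
static int edid_block_valid(const uint8_t *edid)
{
	static const uint8_t header[8] =
		{ 0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00 };
	uint8_t sum = 0;
	int i;

	if (memcmp(edid, header, sizeof(header)) != 0)
		return 0;
	for (i = 0; i < 128; i++)
		sum += edid[i];	/* uint8_t arithmetic wraps mod 256 */
	return sum == 0;
}
```

A check like this is a reasonable guard before trusting bytes read over a flaky transfer path like the one above.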
773 774
774 static int dlfb_ops_ioctl(struct fb_info *info, unsigned int cmd, 775 static int dlfb_ops_ioctl(struct fb_info *info, unsigned int cmd,
775 unsigned long arg) 776 unsigned long arg)
776 { 777 {
	struct dlfb_data *dev = info->par;

	if (!atomic_read(&dev->usb_active))
		return 0;

	/* TODO: Update X server to get this from sysfs instead */
	if (cmd == DLFB_IOCTL_RETURN_EDID) {
		void __user *edid = (void __user *)arg;
		if (copy_to_user(edid, dev->edid, dev->edid_size))
			return -EFAULT;
		return 0;
	}

	/* TODO: Help propose a standard fb.h ioctl to report mmap damage */
	if (cmd == DLFB_IOCTL_REPORT_DAMAGE) {
		struct dloarea area;

		if (copy_from_user(&area, (void __user *)arg,
				  sizeof(struct dloarea)))
			return -EFAULT;

		/*
		 * If we have a damage-aware client, turn fb_defio "off"
		 * to avoid the perf impact of unnecessary page fault handling.
		 * Done by resetting the delay for this fb_info to a very
		 * long period. Pages will become writable and stay that way.
		 * Reset to normal value when all clients have closed this fb.
		 */
		if (info->fbdefio)
			info->fbdefio->delay = DL_DEFIO_WRITE_DISABLE;

		if (area.x < 0)
			area.x = 0;

		if (area.x > info->var.xres)
			area.x = info->var.xres;

		if (area.y < 0)
			area.y = 0;

		if (area.y > info->var.yres)
			area.y = info->var.yres;

		dlfb_handle_damage(dev, area.x, area.y, area.w, area.h,
			   info->screen_base);
	}

	return 0;
}

/* taken from vesafb */
static int
dlfb_ops_setcolreg(unsigned regno, unsigned red, unsigned green,
		unsigned blue, unsigned transp, struct fb_info *info)
{
	int err = 0;

	if (regno >= info->cmap.len)
		return 1;

	if (regno < 16) {
		if (info->var.red.offset == 10) {
			/* 1:5:5:5 */
			((u32 *) (info->pseudo_palette))[regno] =
			    ((red & 0xf800) >> 1) |
			    ((green & 0xf800) >> 6) | ((blue & 0xf800) >> 11);
		} else {
			/* 0:5:6:5 */
			((u32 *) (info->pseudo_palette))[regno] =
			    ((red & 0xf800)) |
			    ((green & 0xfc00) >> 5) | ((blue & 0xf800) >> 11);
		}
	}

	return err;
}

/*
 * It's common for several clients to have the framebuffer open simultaneously,
 * e.g. both fbcon and X. Makes things interesting.
 * Assumes caller is holding info->lock (for open and release at least)
 */
static int dlfb_ops_open(struct fb_info *info, int user)
{
	struct dlfb_data *dev = info->par;

	/*
	 * fbcon aggressively connects to first framebuffer it finds,
	 * preventing other clients (X) from working properly. Usually
	 * not what the user wants. Fail by default with option to enable.
	 */
	if ((user == 0) && (!console))
		return -EBUSY;

	/* If the USB device is gone, we don't accept new opens */
	if (dev->virtualized)
		return -ENODEV;

	dev->fb_count++;

	kref_get(&dev->kref);

	if (fb_defio && (info->fbdefio == NULL)) {
		/* enable defio at last moment if not disabled by client */

		struct fb_deferred_io *fbdefio;

		fbdefio = kmalloc(sizeof(struct fb_deferred_io), GFP_KERNEL);

		if (fbdefio) {
			fbdefio->delay = DL_DEFIO_WRITE_DELAY;
			fbdefio->deferred_io = dlfb_dpy_deferred_io;
		}

		info->fbdefio = fbdefio;
		fb_deferred_io_init(info);
	}

	pr_notice("open /dev/fb%d user=%d fb_info=%p count=%d\n",
	    info->node, user, info, dev->fb_count);

	return 0;
}

/*
 * Called when all client interfaces to start transactions have been disabled,
 * and all references to our device instance (dlfb_data) are released.
 * Every transaction must have a reference, so we know we are fully spun down
 */
static void dlfb_free(struct kref *kref)
{
	struct dlfb_data *dev = container_of(kref, struct dlfb_data, kref);

	/* this function will wait for all in-flight urbs to complete */
	if (dev->urbs.count > 0)
		dlfb_free_urb_list(dev);

	if (dev->backing_buffer)
		vfree(dev->backing_buffer);

	kfree(dev->edid);

	pr_warn("freeing dlfb_data %p\n", dev);

	kfree(dev);
}

static void dlfb_release_urb_work(struct work_struct *work)
{
	struct urb_node *unode = container_of(work, struct urb_node,
					      release_urb_work.work);

	up(&unode->dev->urbs.limit_sem);
}

static void dlfb_free_framebuffer_work(struct work_struct *work)
{
	struct dlfb_data *dev = container_of(work, struct dlfb_data,
					     free_framebuffer_work.work);
	struct fb_info *info = dev->info;
	int node = info->node;

	unregister_framebuffer(info);

	if (info->cmap.len != 0)
		fb_dealloc_cmap(&info->cmap);
	if (info->monspecs.modedb)
		fb_destroy_modedb(info->monspecs.modedb);
	if (info->screen_base)
		vfree(info->screen_base);

	fb_destroy_modelist(&info->modelist);

	dev->info = 0;

	/* Assume info structure is freed after this point */
	framebuffer_release(info);

	pr_warn("fb_info for /dev/fb%d has been freed\n", node);

	/* ref taken in probe() as part of registering the framebuffer */
	kref_put(&dev->kref, dlfb_free);
}

/*
 * Assumes caller is holding info->lock mutex (for open and release at least)
 */
static int dlfb_ops_release(struct fb_info *info, int user)
{
	struct dlfb_data *dev = info->par;

	dev->fb_count--;

	/* We can't free fb_info here - fbmem will touch it when we return */
	if (dev->virtualized && (dev->fb_count == 0))
		schedule_delayed_work(&dev->free_framebuffer_work, HZ);

	if ((dev->fb_count == 0) && (info->fbdefio)) {
		fb_deferred_io_cleanup(info);
		kfree(info->fbdefio);
		info->fbdefio = NULL;
		info->fbops->fb_mmap = dlfb_ops_mmap;
	}

	pr_warn("released /dev/fb%d user=%d count=%d\n",
		info->node, user, dev->fb_count);

	kref_put(&dev->kref, dlfb_free);

	return 0;
}

/*
 * Check whether a video mode is supported by the DisplayLink chip
 * We start from the monitor's modes, so we don't need to filter that here
 */
static int dlfb_is_valid_mode(struct fb_videomode *mode,
		struct fb_info *info)
{
	struct dlfb_data *dev = info->par;

	if (mode->xres * mode->yres > dev->sku_pixel_limit) {
		pr_warn("%dx%d beyond chip capabilities\n",
		       mode->xres, mode->yres);
		return 0;
	}

	pr_info("%dx%d valid mode\n", mode->xres, mode->yres);

	return 1;
}

static void dlfb_var_color_format(struct fb_var_screeninfo *var)
{
	const struct fb_bitfield red = { 11, 5, 0 };
	const struct fb_bitfield green = { 5, 6, 0 };
	const struct fb_bitfield blue = { 0, 5, 0 };

	var->bits_per_pixel = 16;
	var->red = red;
	var->green = green;
	var->blue = blue;
}

static int dlfb_ops_check_var(struct fb_var_screeninfo *var,
				struct fb_info *info)
{
	struct fb_videomode mode;

	/* TODO: support dynamically changing framebuffer size */
	if ((var->xres * var->yres * 2) > info->fix.smem_len)
		return -EINVAL;

	/* set device-specific elements of var unrelated to mode */
	dlfb_var_color_format(var);

	fb_var_to_videomode(&mode, var);

	if (!dlfb_is_valid_mode(&mode, info))
		return -EINVAL;

	return 0;
}

static int dlfb_ops_set_par(struct fb_info *info)
{
	struct dlfb_data *dev = info->par;
	int result;
	u16 *pix_framebuffer;
	int i;

	pr_notice("set_par mode %dx%d\n", info->var.xres, info->var.yres);

	result = dlfb_set_video_mode(dev, &info->var);

	if ((result == 0) && (dev->fb_count == 0)) {

		/* paint greenscreen */

		pix_framebuffer = (u16 *) info->screen_base;
		for (i = 0; i < info->fix.smem_len / 2; i++)
			pix_framebuffer[i] = 0x37e6;

		dlfb_handle_damage(dev, 0, 0, info->var.xres, info->var.yres,
				   info->screen_base);
	}

	return result;
}

/* To fonzi the jukebox (e.g. make blanking changes take effect) */
static char *dlfb_dummy_render(char *buf)
{
	*buf++ = 0xAF;
	*buf++ = 0x6A; /* copy */
	*buf++ = 0x00; /* from address */
	*buf++ = 0x00;
	*buf++ = 0x00;
	*buf++ = 0x01; /* one pixel */
	*buf++ = 0x00; /* to address */
	*buf++ = 0x00;
	*buf++ = 0x00;
	return buf;
}

/*
 * In order to come back from full DPMS off, we need to set the mode again
 */
static int dlfb_ops_blank(int blank_mode, struct fb_info *info)
{
	struct dlfb_data *dev = info->par;
	char *bufptr;
	struct urb *urb;

	pr_info("/dev/fb%d FB_BLANK mode %d --> %d\n",
		info->node, dev->blank_mode, blank_mode);

	if ((dev->blank_mode == FB_BLANK_POWERDOWN) &&
	    (blank_mode != FB_BLANK_POWERDOWN)) {

		/* returning from powerdown requires a fresh modeset */
		dlfb_set_video_mode(dev, &info->var);
	}

	urb = dlfb_get_urb(dev);
	if (!urb)
		return 0;

	bufptr = (char *) urb->transfer_buffer;
	bufptr = dlfb_vidreg_lock(bufptr);
	bufptr = dlfb_blanking(bufptr, blank_mode);
	bufptr = dlfb_vidreg_unlock(bufptr);

	/* seems like a render op is needed to have blank change take effect */
	bufptr = dlfb_dummy_render(bufptr);

	dlfb_submit_urb(dev, urb, bufptr -
			(char *) urb->transfer_buffer);

	dev->blank_mode = blank_mode;

	return 0;
}

static struct fb_ops dlfb_ops = {
	.owner = THIS_MODULE,
	.fb_read = fb_sys_read,
	.fb_write = dlfb_ops_write,
	.fb_setcolreg = dlfb_ops_setcolreg,
	.fb_fillrect = dlfb_ops_fillrect,
	.fb_copyarea = dlfb_ops_copyarea,
	.fb_imageblit = dlfb_ops_imageblit,
	.fb_mmap = dlfb_ops_mmap,
	.fb_ioctl = dlfb_ops_ioctl,
	.fb_open = dlfb_ops_open,
	.fb_release = dlfb_ops_release,
	.fb_blank = dlfb_ops_blank,
	.fb_check_var = dlfb_ops_check_var,
	.fb_set_par = dlfb_ops_set_par,
};

/*
 * Assumes &info->lock held by caller
 * Assumes no active clients have framebuffer open
 */
static int dlfb_realloc_framebuffer(struct dlfb_data *dev, struct fb_info *info)
{
	int retval = -ENOMEM;
	int old_len = info->fix.smem_len;
	int new_len;
	unsigned char *old_fb = info->screen_base;
	unsigned char *new_fb;
-	unsigned char *new_back;
+	unsigned char *new_back = 0;

	pr_warn("Reallocating framebuffer. Addresses will change!\n");

	new_len = info->fix.line_length * info->var.yres;

	if (PAGE_ALIGN(new_len) > old_len) {
		/*
		 * Alloc system memory for virtual framebuffer
		 */
		new_fb = vmalloc(new_len);
		if (!new_fb) {
			pr_err("Virtual framebuffer alloc failed\n");
			goto error;
		}

		if (info->screen_base) {
			memcpy(new_fb, old_fb, old_len);
			vfree(info->screen_base);
		}

		info->screen_base = new_fb;
		info->fix.smem_len = PAGE_ALIGN(new_len);
		info->fix.smem_start = (unsigned long) new_fb;
		info->flags = udlfb_info_flags;

		/*
		 * Second framebuffer copy to mirror the framebuffer state
		 * on the physical USB device. We can function without this.
		 * But with imperfect damage info we may send pixels over USB
		 * that were, in fact, unchanged - wasting limited USB bandwidth
		 */
-		new_back = vzalloc(new_len);
+		if (shadow)
+			new_back = vzalloc(new_len);
		if (!new_back)
			pr_info("No shadow/backing buffer allocated\n");
		else {
			if (dev->backing_buffer)
				vfree(dev->backing_buffer);
			dev->backing_buffer = new_back;
		}
	}

	retval = 0;

error:
	return retval;
}

/*
 * 1) Get EDID from hw, or use sw default
 * 2) Parse into various fb_info structs
 * 3) Allocate virtual framebuffer memory to back highest res mode
 *
 * Parses EDID into three places used by various parts of fbdev:
 * fb_var_screeninfo contains the timing of the monitor's preferred mode
 * fb_info.monspecs is full parsed EDID info, including monspecs.modedb
 * fb_info.modelist is a linked list of all monitor & VESA modes which work
 *
 * If EDID is not readable/valid, then modelist is all VESA modes,
 * monspecs is NULL, and fb_var_screeninfo is set to safe VESA mode
 * Returns 0 if successful
 */
static int dlfb_setup_modes(struct dlfb_data *dev,
			    struct fb_info *info,
			    char *default_edid, size_t default_edid_size)
{
	int i;
	const struct fb_videomode *default_vmode = NULL;
	int result = 0;
	char *edid;
	int tries = 3;

	if (info->dev) /* only use mutex if info has been registered */
		mutex_lock(&info->lock);

	edid = kmalloc(EDID_LENGTH, GFP_KERNEL);
	if (!edid) {
		result = -ENOMEM;
		goto error;
	}

	fb_destroy_modelist(&info->modelist);
	memset(&info->monspecs, 0, sizeof(info->monspecs));

	/*
	 * Try to (re)read EDID from hardware first
	 * EDID data may return, but not parse as valid
	 * Try again a few times, in case of e.g. analog cable noise
	 */
	while (tries--) {

		i = dlfb_get_edid(dev, edid, EDID_LENGTH);

		if (i >= EDID_LENGTH)
			fb_edid_to_monspecs(edid, &info->monspecs);

		if (info->monspecs.modedb_len > 0) {
			dev->edid = edid;
			dev->edid_size = i;
			break;
		}
	}

	/* If that fails, use a previously returned EDID if available */
	if (info->monspecs.modedb_len == 0) {

		pr_err("Unable to get valid EDID from device/display\n");

		if (dev->edid) {
			fb_edid_to_monspecs(dev->edid, &info->monspecs);
			if (info->monspecs.modedb_len > 0)
				pr_err("Using previously queried EDID\n");
		}
	}

	/* If that fails, use the default EDID we were handed */
	if (info->monspecs.modedb_len == 0) {
		if (default_edid_size >= EDID_LENGTH) {
			fb_edid_to_monspecs(default_edid, &info->monspecs);
			if (info->monspecs.modedb_len > 0) {
				memcpy(edid, default_edid, default_edid_size);
				dev->edid = edid;
				dev->edid_size = default_edid_size;
				pr_err("Using default/backup EDID\n");
			}
		}
	}

	/* If we've got modes, let's pick a best default mode */
	if (info->monspecs.modedb_len > 0) {

		for (i = 0; i < info->monspecs.modedb_len; i++) {
			if (dlfb_is_valid_mode(&info->monspecs.modedb[i], info))
				fb_add_videomode(&info->monspecs.modedb[i],
					&info->modelist);
			else {
				if (i == 0)
					/* if we've removed top/best mode */
					info->monspecs.misc
						&= ~FB_MISC_1ST_DETAIL;
			}
		}

		default_vmode = fb_find_best_display(&info->monspecs,
						     &info->modelist);
	}

	/* If everything else has failed, fall back to safe default mode */
	if (default_vmode == NULL) {

		struct fb_videomode fb_vmode = {0};

		/*
		 * Add the standard VESA modes to our modelist
		 * Since we don't have EDID, there may be modes that
		 * overspec the monitor and/or have the wrong aspect ratio, etc.
		 * But at least the user has a chance to choose
		 */
		for (i = 0; i < VESA_MODEDB_SIZE; i++) {
			if (dlfb_is_valid_mode((struct fb_videomode *)
						&vesa_modes[i], info))
				fb_add_videomode(&vesa_modes[i],
						 &info->modelist);
		}

		/*
		 * default to a resolution safe for projectors
		 * (since they are the most common case without EDID)
		 */
		fb_vmode.xres = 800;
		fb_vmode.yres = 600;
		fb_vmode.refresh = 60;
		default_vmode = fb_find_nearest_mode(&fb_vmode,
						     &info->modelist);
	}

	/* If we have a good mode and no active clients */
	if ((default_vmode != NULL) && (dev->fb_count == 0)) {

		fb_videomode_to_var(&info->var, default_vmode);
		dlfb_var_color_format(&info->var);

		/*
		 * with mode size info, we can now alloc our framebuffer.
		 */
		memcpy(&info->fix, &dlfb_fix, sizeof(dlfb_fix));
		info->fix.line_length = info->var.xres *
			(info->var.bits_per_pixel / 8);

		result = dlfb_realloc_framebuffer(dev, info);

	} else
		result = -EINVAL;

error:
	if (edid && (dev->edid != edid))
		kfree(edid);

	if (info->dev)
		mutex_unlock(&info->lock);

	return result;
}

static ssize_t metrics_bytes_rendered_show(struct device *fbdev,
				   struct device_attribute *a, char *buf) {
	struct fb_info *fb_info = dev_get_drvdata(fbdev);
	struct dlfb_data *dev = fb_info->par;
	return snprintf(buf, PAGE_SIZE, "%u\n",
			atomic_read(&dev->bytes_rendered));
}

static ssize_t metrics_bytes_identical_show(struct device *fbdev,
				   struct device_attribute *a, char *buf) {
	struct fb_info *fb_info = dev_get_drvdata(fbdev);
	struct dlfb_data *dev = fb_info->par;
	return snprintf(buf, PAGE_SIZE, "%u\n",
			atomic_read(&dev->bytes_identical));
}

static ssize_t metrics_bytes_sent_show(struct device *fbdev,
				   struct device_attribute *a, char *buf) {
	struct fb_info *fb_info = dev_get_drvdata(fbdev);
	struct dlfb_data *dev = fb_info->par;
	return snprintf(buf, PAGE_SIZE, "%u\n",
			atomic_read(&dev->bytes_sent));
}

static ssize_t metrics_cpu_kcycles_used_show(struct device *fbdev,
				   struct device_attribute *a, char *buf) {
	struct fb_info *fb_info = dev_get_drvdata(fbdev);
	struct dlfb_data *dev = fb_info->par;
	return snprintf(buf, PAGE_SIZE, "%u\n",
			atomic_read(&dev->cpu_kcycles_used));
}

static ssize_t edid_show(
			struct file *filp,
			struct kobject *kobj, struct bin_attribute *a,
			char *buf, loff_t off, size_t count) {
	struct device *fbdev = container_of(kobj, struct device, kobj);
	struct fb_info *fb_info = dev_get_drvdata(fbdev);
	struct dlfb_data *dev = fb_info->par;

	if (dev->edid == NULL)
		return 0;

	if ((off >= dev->edid_size) || (count > dev->edid_size))
		return 0;

	if (off + count > dev->edid_size)
		count = dev->edid_size - off;

	pr_info("sysfs edid copy %p to %p, %d bytes\n",
1405 dev->edid, buf, (int) count); 1407 dev->edid, buf, (int) count);
1406 1408
1407 memcpy(buf, dev->edid, count); 1409 memcpy(buf, dev->edid, count);
1408 1410
1409 return count; 1411 return count;
1410 } 1412 }
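edid_show() above trims the (off, count) window that a sysfs binary read hands it before copying out the cached EDID. The same clamping can be sketched in plain user-space C; clamp_bin_read is a hypothetical helper for illustration only, not a driver function:

```c
#include <assert.h>
#include <stddef.h>

/* Mirrors the clamping in edid_show(): a sysfs binary read supplies an
 * (offset, count) window that must be trimmed to the stored blob.
 * Returns the number of bytes the handler should copy (0 = nothing).
 * Note the driver returns 0, rather than clamping, when the requested
 * count exceeds the whole blob. */
static size_t clamp_bin_read(size_t blob_size, size_t off, size_t count)
{
	if (blob_size == 0)              /* no EDID cached yet */
		return 0;
	if (off >= blob_size || count > blob_size)
		return 0;                /* mirrors the driver's early-outs */
	if (off + count > blob_size)
		count = blob_size - off; /* partial tail read */
	return count;
}
```

A full read of a 128-byte EDID (off 0, count 128) copies 128 bytes; a read starting at offset 100 for 64 bytes is clamped to the 28-byte tail.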
1411 1413
1412 static ssize_t edid_store( 1414 static ssize_t edid_store(
1413 struct file *filp, 1415 struct file *filp,
1414 struct kobject *kobj, struct bin_attribute *a, 1416 struct kobject *kobj, struct bin_attribute *a,
1415 char *src, loff_t src_off, size_t src_size) { 1417 char *src, loff_t src_off, size_t src_size) {
1416 struct device *fbdev = container_of(kobj, struct device, kobj); 1418 struct device *fbdev = container_of(kobj, struct device, kobj);
1417 struct fb_info *fb_info = dev_get_drvdata(fbdev); 1419 struct fb_info *fb_info = dev_get_drvdata(fbdev);
1418 struct dlfb_data *dev = fb_info->par; 1420 struct dlfb_data *dev = fb_info->par;
1419 1421
1420 /* We only support write of entire EDID at once, no offset */ 1422 /* We only support write of entire EDID at once, no offset */
1421 if ((src_size != EDID_LENGTH) || (src_off != 0)) 1423 if ((src_size != EDID_LENGTH) || (src_off != 0))
1422 return 0; 1424 return 0;
1423 1425
1424 dlfb_setup_modes(dev, fb_info, src, src_size); 1426 dlfb_setup_modes(dev, fb_info, src, src_size);
1425 1427
1426 if (dev->edid && (memcmp(src, dev->edid, src_size) == 0)) { 1428 if (dev->edid && (memcmp(src, dev->edid, src_size) == 0)) {
1427 pr_info("sysfs written EDID is new default\n"); 1429 pr_info("sysfs written EDID is new default\n");
1428 dlfb_ops_set_par(fb_info); 1430 dlfb_ops_set_par(fb_info);
1429 return src_size; 1431 return src_size;
1430 } else 1432 } else
1431 return 0; 1433 return 0;
1432 } 1434 }
1433 1435
1434 static ssize_t metrics_reset_store(struct device *fbdev, 1436 static ssize_t metrics_reset_store(struct device *fbdev,
1435 struct device_attribute *attr, 1437 struct device_attribute *attr,
1436 const char *buf, size_t count) 1438 const char *buf, size_t count)
1437 { 1439 {
1438 struct fb_info *fb_info = dev_get_drvdata(fbdev); 1440 struct fb_info *fb_info = dev_get_drvdata(fbdev);
1439 struct dlfb_data *dev = fb_info->par; 1441 struct dlfb_data *dev = fb_info->par;
1440 1442
1441 atomic_set(&dev->bytes_rendered, 0); 1443 atomic_set(&dev->bytes_rendered, 0);
1442 atomic_set(&dev->bytes_identical, 0); 1444 atomic_set(&dev->bytes_identical, 0);
1443 atomic_set(&dev->bytes_sent, 0); 1445 atomic_set(&dev->bytes_sent, 0);
1444 atomic_set(&dev->cpu_kcycles_used, 0); 1446 atomic_set(&dev->cpu_kcycles_used, 0);
1445 1447
1446 return count; 1448 return count;
1447 } 1449 }
1448 1450
1449 static struct bin_attribute edid_attr = { 1451 static struct bin_attribute edid_attr = {
1450 .attr.name = "edid", 1452 .attr.name = "edid",
1451 .attr.mode = 0666, 1453 .attr.mode = 0666,
1452 .size = EDID_LENGTH, 1454 .size = EDID_LENGTH,
1453 .read = edid_show, 1455 .read = edid_show,
1454 .write = edid_store 1456 .write = edid_store
1455 }; 1457 };
1456 1458
1457 static struct device_attribute fb_device_attrs[] = { 1459 static struct device_attribute fb_device_attrs[] = {
1458 __ATTR_RO(metrics_bytes_rendered), 1460 __ATTR_RO(metrics_bytes_rendered),
1459 __ATTR_RO(metrics_bytes_identical), 1461 __ATTR_RO(metrics_bytes_identical),
1460 __ATTR_RO(metrics_bytes_sent), 1462 __ATTR_RO(metrics_bytes_sent),
1461 __ATTR_RO(metrics_cpu_kcycles_used), 1463 __ATTR_RO(metrics_cpu_kcycles_used),
1462 __ATTR(metrics_reset, S_IWUSR, NULL, metrics_reset_store), 1464 __ATTR(metrics_reset, S_IWUSR, NULL, metrics_reset_store),
1463 }; 1465 };
1464 1466
1465 /* 1467 /*
1466 * This is necessary before we can communicate with the display controller. 1468 * This is necessary before we can communicate with the display controller.
1467 */ 1469 */
1468 static int dlfb_select_std_channel(struct dlfb_data *dev) 1470 static int dlfb_select_std_channel(struct dlfb_data *dev)
1469 { 1471 {
1470 int ret; 1472 int ret;
1471 u8 set_def_chn[] = { 0x57, 0xCD, 0xDC, 0xA7, 1473 u8 set_def_chn[] = { 0x57, 0xCD, 0xDC, 0xA7,
1472 0x1C, 0x88, 0x5E, 0x15, 1474 0x1C, 0x88, 0x5E, 0x15,
1473 0x60, 0xFE, 0xC6, 0x97, 1475 0x60, 0xFE, 0xC6, 0x97,
1474 0x16, 0x3D, 0x47, 0xF2 }; 1476 0x16, 0x3D, 0x47, 0xF2 };
1475 1477
1476 ret = usb_control_msg(dev->udev, usb_sndctrlpipe(dev->udev, 0), 1478 ret = usb_control_msg(dev->udev, usb_sndctrlpipe(dev->udev, 0),
1477 NR_USB_REQUEST_CHANNEL, 1479 NR_USB_REQUEST_CHANNEL,
1478 (USB_DIR_OUT | USB_TYPE_VENDOR), 0, 0, 1480 (USB_DIR_OUT | USB_TYPE_VENDOR), 0, 0,
1479 set_def_chn, sizeof(set_def_chn), USB_CTRL_SET_TIMEOUT); 1481 set_def_chn, sizeof(set_def_chn), USB_CTRL_SET_TIMEOUT);
1480 return ret; 1482 return ret;
1481 } 1483 }
1482 1484
1483 static int dlfb_parse_vendor_descriptor(struct dlfb_data *dev, 1485 static int dlfb_parse_vendor_descriptor(struct dlfb_data *dev,
1484 struct usb_interface *interface) 1486 struct usb_interface *interface)
1485 { 1487 {
1486 char *desc; 1488 char *desc;
1487 char *buf; 1489 char *buf;
1488 char *desc_end; 1490 char *desc_end;
1489 1491
1490 int total_len = 0; 1492 int total_len = 0;
1491 1493
1492 buf = kzalloc(MAX_VENDOR_DESCRIPTOR_SIZE, GFP_KERNEL); 1494 buf = kzalloc(MAX_VENDOR_DESCRIPTOR_SIZE, GFP_KERNEL);
1493 if (!buf) 1495 if (!buf)
1494 return false; 1496 return false;
1495 desc = buf; 1497 desc = buf;
1496 1498
1497 total_len = usb_get_descriptor(interface_to_usbdev(interface), 1499 total_len = usb_get_descriptor(interface_to_usbdev(interface),
1498 0x5f, /* vendor specific */ 1500 0x5f, /* vendor specific */
1499 0, desc, MAX_VENDOR_DESCRIPTOR_SIZE); 1501 0, desc, MAX_VENDOR_DESCRIPTOR_SIZE);
1500 1502
1501 /* if not found, look in configuration descriptor */ 1503 /* if not found, look in configuration descriptor */
1502 if (total_len < 0) { 1504 if (total_len < 0) {
1503 if (0 == usb_get_extra_descriptor(interface->cur_altsetting, 1505 if (0 == usb_get_extra_descriptor(interface->cur_altsetting,
1504 0x5f, &desc)) 1506 0x5f, &desc))
1505 total_len = (int) desc[0]; 1507 total_len = (int) desc[0];
1506 } 1508 }
1507 1509
1508 if (total_len > 5) { 1510 if (total_len > 5) {
1509 pr_info("vendor descriptor length:%x data:%02x %02x %02x %02x" \ 1511 pr_info("vendor descriptor length:%x data:%02x %02x %02x %02x" \
1510 "%02x %02x %02x %02x %02x %02x %02x\n", 1512 "%02x %02x %02x %02x %02x %02x %02x\n",
1511 total_len, desc[0], 1513 total_len, desc[0],
1512 desc[1], desc[2], desc[3], desc[4], desc[5], desc[6], 1514 desc[1], desc[2], desc[3], desc[4], desc[5], desc[6],
1513 desc[7], desc[8], desc[9], desc[10]); 1515 desc[7], desc[8], desc[9], desc[10]);
1514 1516
1515 if ((desc[0] != total_len) || /* descriptor length */ 1517 if ((desc[0] != total_len) || /* descriptor length */
1516 (desc[1] != 0x5f) || /* vendor descriptor type */ 1518 (desc[1] != 0x5f) || /* vendor descriptor type */
1517 (desc[2] != 0x01) || /* version (2 bytes) */ 1519 (desc[2] != 0x01) || /* version (2 bytes) */
1518 (desc[3] != 0x00) || 1520 (desc[3] != 0x00) ||
1519 (desc[4] != total_len - 2)) /* length after type */ 1521 (desc[4] != total_len - 2)) /* length after type */
1520 goto unrecognized; 1522 goto unrecognized;
1521 1523
1522 desc_end = desc + total_len; 1524 desc_end = desc + total_len;
1523 desc += 5; /* the fixed header we've already parsed */ 1525 desc += 5; /* the fixed header we've already parsed */
1524 1526
1525 while (desc < desc_end) { 1527 while (desc < desc_end) {
1526 u8 length; 1528 u8 length;
1527 u16 key; 1529 u16 key;
1528 1530
1529 key = *((u16 *) desc); 1531 key = *((u16 *) desc);
1530 desc += sizeof(u16); 1532 desc += sizeof(u16);
1531 length = *desc; 1533 length = *desc;
1532 desc++; 1534 desc++;
1533 1535
1534 switch (key) { 1536 switch (key) {
1535 case 0x0200: { /* max_area */ 1537 case 0x0200: { /* max_area */
1536 u32 max_area; 1538 u32 max_area;
1537 max_area = le32_to_cpu(*((u32 *)desc)); 1539 max_area = le32_to_cpu(*((u32 *)desc));
1538 pr_warn("DL chip limited to %d pixel modes\n", 1540 pr_warn("DL chip limited to %d pixel modes\n",
1539 max_area); 1541 max_area);
1540 dev->sku_pixel_limit = max_area; 1542 dev->sku_pixel_limit = max_area;
1541 break; 1543 break;
1542 } 1544 }
1543 default: 1545 default:
1544 break; 1546 break;
1545 } 1547 }
1546 desc += length; 1548 desc += length;
1547 } 1549 }
1548 } else { 1550 } else {
1549 pr_info("vendor descriptor not available (%d)\n", total_len); 1551 pr_info("vendor descriptor not available (%d)\n", total_len);
1550 } 1552 }
1551 1553
1552 goto success; 1554 goto success;
1553 1555
1554 unrecognized: 1556 unrecognized:
1555 /* allow udlfb to load for now even if firmware unrecognized */ 1557 /* allow udlfb to load for now even if firmware unrecognized */
1556 pr_err("Unrecognized vendor firmware descriptor\n"); 1558 pr_err("Unrecognized vendor firmware descriptor\n");
1557 1559
1558 success: 1560 success:
1559 kfree(buf); 1561 kfree(buf);
1560 return true; 1562 return true;
1561 } 1563 }
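The vendor descriptor walked above is a key/length/value list: a little-endian u16 key, a u8 payload length, then the payload. A user-space sketch of the same walk, pulling out the 0x0200 (max_area) record the driver cares about, might look like this; the byte layout and function name are assumptions for illustration:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Walk a key/length/value vendor blob: u16 key (little-endian),
 * u8 length, then `length` payload bytes. Return the u32 max_area
 * stored under key 0x0200, or 0 if the record is absent/truncated. */
static uint32_t parse_max_area(const uint8_t *desc, size_t len)
{
	const uint8_t *end = desc + len;

	while (desc + 3 <= end) {
		uint16_t key = (uint16_t)(desc[0] | (desc[1] << 8));
		uint8_t length = desc[2];
		desc += 3;

		if (desc + length > end)
			break;           /* truncated record, stop */
		if (key == 0x0200 && length >= 4)
			return (uint32_t)desc[0] | ((uint32_t)desc[1] << 8) |
			       ((uint32_t)desc[2] << 16) |
			       ((uint32_t)desc[3] << 24);
		desc += length;
	}
	return 0;
}
```

The driver's version additionally validates the 5-byte fixed header (type 0x5f, version, length-after-type) before walking the records; a blob carrying key 0x0200 with payload 00 00 24 00 decodes to 2359296 pixels, i.e. the 2048x1152 default above.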
1562 static int dlfb_usb_probe(struct usb_interface *interface, 1564 static int dlfb_usb_probe(struct usb_interface *interface,
1563 const struct usb_device_id *id) 1565 const struct usb_device_id *id)
1564 { 1566 {
1565 struct usb_device *usbdev; 1567 struct usb_device *usbdev;
1566 struct dlfb_data *dev = NULL; 1568 struct dlfb_data *dev = NULL;
1567 struct fb_info *info = NULL; 1569 struct fb_info *info = NULL;
1568 int retval = -ENOMEM; 1570 int retval = -ENOMEM;
1569 int i; 1571 int i;
1570 1572
1571 /* usb initialization */ 1573 /* usb initialization */
1572 1574
1573 usbdev = interface_to_usbdev(interface); 1575 usbdev = interface_to_usbdev(interface);
1574 1576
1575 dev = kzalloc(sizeof(*dev), GFP_KERNEL); 1577 dev = kzalloc(sizeof(*dev), GFP_KERNEL);
1576 if (dev == NULL) { 1578 if (dev == NULL) {
1577 err("dlfb_usb_probe: failed alloc of dev struct\n"); 1579 err("dlfb_usb_probe: failed alloc of dev struct\n");
1578 goto error; 1580 goto error;
1579 } 1581 }
1580 1582
1581 /* we need to wait for both usb and fbdev to spin down on disconnect */ 1583 /* we need to wait for both usb and fbdev to spin down on disconnect */
1582 kref_init(&dev->kref); /* matching kref_put in usb .disconnect fn */ 1584 kref_init(&dev->kref); /* matching kref_put in usb .disconnect fn */
1583 kref_get(&dev->kref); /* matching kref_put in free_framebuffer_work */ 1585 kref_get(&dev->kref); /* matching kref_put in free_framebuffer_work */
1584 1586
1585 dev->udev = usbdev; 1587 dev->udev = usbdev;
1586 dev->gdev = &usbdev->dev; /* our generic struct device * */ 1588 dev->gdev = &usbdev->dev; /* our generic struct device * */
1587 usb_set_intfdata(interface, dev); 1589 usb_set_intfdata(interface, dev);
1588 1590
1589 pr_info("%s %s - serial #%s\n", 1591 pr_info("%s %s - serial #%s\n",
1590 usbdev->manufacturer, usbdev->product, usbdev->serial); 1592 usbdev->manufacturer, usbdev->product, usbdev->serial);
1591 pr_info("vid_%04x&pid_%04x&rev_%04x driver's dlfb_data struct at %p\n", 1593 pr_info("vid_%04x&pid_%04x&rev_%04x driver's dlfb_data struct at %p\n",
1592 usbdev->descriptor.idVendor, usbdev->descriptor.idProduct, 1594 usbdev->descriptor.idVendor, usbdev->descriptor.idProduct,
1593 usbdev->descriptor.bcdDevice, dev); 1595 usbdev->descriptor.bcdDevice, dev);
1594 pr_info("console enable=%d\n", console); 1596 pr_info("console enable=%d\n", console);
1595 pr_info("fb_defio enable=%d\n", fb_defio); 1597 pr_info("fb_defio enable=%d\n", fb_defio);
1598 pr_info("shadow enable=%d\n", shadow);
1596 1599
1597 dev->sku_pixel_limit = 2048 * 1152; /* default to maximum */ 1600 dev->sku_pixel_limit = 2048 * 1152; /* default to maximum */
1598 1601
1599 if (!dlfb_parse_vendor_descriptor(dev, interface)) { 1602 if (!dlfb_parse_vendor_descriptor(dev, interface)) {
1600 pr_err("firmware not recognized. Assume incompatible device\n"); 1603 pr_err("firmware not recognized. Assume incompatible device\n");
1601 goto error; 1604 goto error;
1602 } 1605 }
1603 1606
1604 if (!dlfb_alloc_urb_list(dev, WRITES_IN_FLIGHT, MAX_TRANSFER)) { 1607 if (!dlfb_alloc_urb_list(dev, WRITES_IN_FLIGHT, MAX_TRANSFER)) {
1605 retval = -ENOMEM; 1608 retval = -ENOMEM;
1606 pr_err("dlfb_alloc_urb_list failed\n"); 1609 pr_err("dlfb_alloc_urb_list failed\n");
1607 goto error; 1610 goto error;
1608 } 1611 }
1609 1612
1610 /* We don't register a new USB class. Our client interface is fbdev */ 1613 /* We don't register a new USB class. Our client interface is fbdev */
1611 1614
1612 /* allocates framebuffer driver structure, not framebuffer memory */ 1615 /* allocates framebuffer driver structure, not framebuffer memory */
1613 info = framebuffer_alloc(0, &usbdev->dev); 1616 info = framebuffer_alloc(0, &usbdev->dev);
1614 if (!info) { 1617 if (!info) {
1615 retval = -ENOMEM; 1618 retval = -ENOMEM;
1616 pr_err("framebuffer_alloc failed\n"); 1619 pr_err("framebuffer_alloc failed\n");
1617 goto error; 1620 goto error;
1618 } 1621 }
1619 1622
1620 dev->info = info; 1623 dev->info = info;
1621 info->par = dev; 1624 info->par = dev;
1622 info->pseudo_palette = dev->pseudo_palette; 1625 info->pseudo_palette = dev->pseudo_palette;
1623 info->fbops = &dlfb_ops; 1626 info->fbops = &dlfb_ops;
1624 1627
1625 retval = fb_alloc_cmap(&info->cmap, 256, 0); 1628 retval = fb_alloc_cmap(&info->cmap, 256, 0);
1626 if (retval < 0) { 1629 if (retval < 0) {
1627 pr_err("fb_alloc_cmap failed %x\n", retval); 1630 pr_err("fb_alloc_cmap failed %x\n", retval);
1628 goto error; 1631 goto error;
1629 } 1632 }
1630 1633
1631 INIT_DELAYED_WORK(&dev->free_framebuffer_work, 1634 INIT_DELAYED_WORK(&dev->free_framebuffer_work,
1632 dlfb_free_framebuffer_work); 1635 dlfb_free_framebuffer_work);
1633 1636
1634 INIT_LIST_HEAD(&info->modelist); 1637 INIT_LIST_HEAD(&info->modelist);
1635 1638
1636 retval = dlfb_setup_modes(dev, info, NULL, 0); 1639 retval = dlfb_setup_modes(dev, info, NULL, 0);
1637 if (retval != 0) { 1640 if (retval != 0) {
1638 pr_err("unable to find common mode for display and adapter\n"); 1641 pr_err("unable to find common mode for display and adapter\n");
1639 goto error; 1642 goto error;
1640 } 1643 }
1641 1644
1642 /* ready to begin using device */ 1645 /* ready to begin using device */
1643 1646
1644 atomic_set(&dev->usb_active, 1); 1647 atomic_set(&dev->usb_active, 1);
1645 dlfb_select_std_channel(dev); 1648 dlfb_select_std_channel(dev);
1646 1649
1647 dlfb_ops_check_var(&info->var, info); 1650 dlfb_ops_check_var(&info->var, info);
1648 dlfb_ops_set_par(info); 1651 dlfb_ops_set_par(info);
1649 1652
1650 retval = register_framebuffer(info); 1653 retval = register_framebuffer(info);
1651 if (retval < 0) { 1654 if (retval < 0) {
1652 pr_err("register_framebuffer failed %d\n", retval); 1655 pr_err("register_framebuffer failed %d\n", retval);
1653 goto error; 1656 goto error;
1654 } 1657 }
1655 1658
1656 for (i = 0; i < ARRAY_SIZE(fb_device_attrs); i++) { 1659 for (i = 0; i < ARRAY_SIZE(fb_device_attrs); i++) {
1657 retval = device_create_file(info->dev, &fb_device_attrs[i]); 1660 retval = device_create_file(info->dev, &fb_device_attrs[i]);
1658 if (retval) { 1661 if (retval) {
1659 pr_err("device_create_file failed %d\n", retval); 1662 pr_err("device_create_file failed %d\n", retval);
1660 goto err_del_attrs; 1663 goto err_del_attrs;
1661 } 1664 }
1662 } 1665 }
1663 1666
1664 retval = device_create_bin_file(info->dev, &edid_attr); 1667 retval = device_create_bin_file(info->dev, &edid_attr);
1665 if (retval) { 1668 if (retval) {
1666 pr_err("device_create_bin_file failed %d\n", retval); 1669 pr_err("device_create_bin_file failed %d\n", retval);
1667 goto err_del_attrs; 1670 goto err_del_attrs;
1668 } 1671 }
1669 1672
1670 pr_info("DisplayLink USB device /dev/fb%d attached. %dx%d resolution." 1673 pr_info("DisplayLink USB device /dev/fb%d attached. %dx%d resolution."
1671 " Using %dK framebuffer memory\n", info->node, 1674 " Using %dK framebuffer memory\n", info->node,
1672 info->var.xres, info->var.yres, 1675 info->var.xres, info->var.yres,
1673 ((dev->backing_buffer) ? 1676 ((dev->backing_buffer) ?
1674 info->fix.smem_len * 2 : info->fix.smem_len) >> 10); 1677 info->fix.smem_len * 2 : info->fix.smem_len) >> 10);
1675 return 0; 1678 return 0;
1676 1679
1677 err_del_attrs: 1680 err_del_attrs:
1678 for (i -= 1; i >= 0; i--) 1681 for (i -= 1; i >= 0; i--)
1679 device_remove_file(info->dev, &fb_device_attrs[i]); 1682 device_remove_file(info->dev, &fb_device_attrs[i]);
1680 1683
1681 error: 1684 error:
1682 if (dev) { 1685 if (dev) {
1683 1686
1684 if (info) { 1687 if (info) {
1685 if (info->cmap.len != 0) 1688 if (info->cmap.len != 0)
1686 fb_dealloc_cmap(&info->cmap); 1689 fb_dealloc_cmap(&info->cmap);
1687 if (info->monspecs.modedb) 1690 if (info->monspecs.modedb)
1688 fb_destroy_modedb(info->monspecs.modedb); 1691 fb_destroy_modedb(info->monspecs.modedb);
1689 if (info->screen_base) 1692 if (info->screen_base)
1690 vfree(info->screen_base); 1693 vfree(info->screen_base);
1691 1694
1692 fb_destroy_modelist(&info->modelist); 1695 fb_destroy_modelist(&info->modelist);
1693 1696
1694 framebuffer_release(info); 1697 framebuffer_release(info);
1695 } 1698 }
1696 1699
1697 if (dev->backing_buffer) 1700 if (dev->backing_buffer)
1698 vfree(dev->backing_buffer); 1701 vfree(dev->backing_buffer);
1699 1702
1700 kref_put(&dev->kref, dlfb_free); /* ref for framebuffer */ 1703 kref_put(&dev->kref, dlfb_free); /* ref for framebuffer */
1701 kref_put(&dev->kref, dlfb_free); /* last ref from kref_init */ 1704 kref_put(&dev->kref, dlfb_free); /* last ref from kref_init */
1702 1705
1703 /* dev has been deallocated. Do not dereference */ 1706 /* dev has been deallocated. Do not dereference */
1704 } 1707 }
1705 1708
1706 return retval; 1709 return retval;
1707 } 1710 }
1708 1711
1709 static void dlfb_usb_disconnect(struct usb_interface *interface) 1712 static void dlfb_usb_disconnect(struct usb_interface *interface)
1710 { 1713 {
1711 struct dlfb_data *dev; 1714 struct dlfb_data *dev;
1712 struct fb_info *info; 1715 struct fb_info *info;
1713 int i; 1716 int i;
1714 1717
1715 dev = usb_get_intfdata(interface); 1718 dev = usb_get_intfdata(interface);
1716 info = dev->info; 1719 info = dev->info;
1717 1720
1718 pr_info("USB disconnect starting\n"); 1721 pr_info("USB disconnect starting\n");
1719 1722
1720 /* we virtualize until all fb clients release. Then we free */ 1723 /* we virtualize until all fb clients release. Then we free */
1721 dev->virtualized = true; 1724 dev->virtualized = true;
1722 1725
1723 /* When non-active we'll update virtual framebuffer, but no new urbs */ 1726 /* When non-active we'll update virtual framebuffer, but no new urbs */
1724 atomic_set(&dev->usb_active, 0); 1727 atomic_set(&dev->usb_active, 0);
1725 1728
1726 /* remove udlfb's sysfs interfaces */ 1729 /* remove udlfb's sysfs interfaces */
1727 for (i = 0; i < ARRAY_SIZE(fb_device_attrs); i++) 1730 for (i = 0; i < ARRAY_SIZE(fb_device_attrs); i++)
1728 device_remove_file(info->dev, &fb_device_attrs[i]); 1731 device_remove_file(info->dev, &fb_device_attrs[i]);
1729 device_remove_bin_file(info->dev, &edid_attr); 1732 device_remove_bin_file(info->dev, &edid_attr);
1730 1733
1731 usb_set_intfdata(interface, NULL); 1734 usb_set_intfdata(interface, NULL);
1732 1735
1733 /* if clients still have us open, will be freed on last close */ 1736 /* if clients still have us open, will be freed on last close */
1734 if (dev->fb_count == 0) 1737 if (dev->fb_count == 0)
1735 schedule_delayed_work(&dev->free_framebuffer_work, 0); 1738 schedule_delayed_work(&dev->free_framebuffer_work, 0);
1736 1739
1737 /* release reference taken by kref_init in probe() */ 1740 /* release reference taken by kref_init in probe() */
1738 kref_put(&dev->kref, dlfb_free); 1741 kref_put(&dev->kref, dlfb_free);
1739 1742
1740 /* consider dlfb_data freed */ 1743 /* consider dlfb_data freed */
1741 1744
1742 return; 1745 return;
1743 } 1746 }
1744 1747
1745 static struct usb_driver dlfb_driver = { 1748 static struct usb_driver dlfb_driver = {
1746 .name = "udlfb", 1749 .name = "udlfb",
1747 .probe = dlfb_usb_probe, 1750 .probe = dlfb_usb_probe,
1748 .disconnect = dlfb_usb_disconnect, 1751 .disconnect = dlfb_usb_disconnect,
1749 .id_table = id_table, 1752 .id_table = id_table,
1750 }; 1753 };
1751 1754
1752 static int __init dlfb_module_init(void) 1755 static int __init dlfb_module_init(void)
1753 { 1756 {
1754 int res; 1757 int res;
1755 1758
1756 res = usb_register(&dlfb_driver); 1759 res = usb_register(&dlfb_driver);
1757 if (res) 1760 if (res)
1758 err("usb_register failed. Error number %d", res); 1761 err("usb_register failed. Error number %d", res);
1759 1762
1760 return res; 1763 return res;
1761 } 1764 }
1762 1765
1763 static void __exit dlfb_module_exit(void) 1766 static void __exit dlfb_module_exit(void)
1764 { 1767 {
1765 usb_deregister(&dlfb_driver); 1768 usb_deregister(&dlfb_driver);
1766 } 1769 }
1767 1770
1768 module_init(dlfb_module_init); 1771 module_init(dlfb_module_init);
1769 module_exit(dlfb_module_exit); 1772 module_exit(dlfb_module_exit);
1770 1773
1771 static void dlfb_urb_completion(struct urb *urb) 1774 static void dlfb_urb_completion(struct urb *urb)
1772 { 1775 {
1773 struct urb_node *unode = urb->context; 1776 struct urb_node *unode = urb->context;
1774 struct dlfb_data *dev = unode->dev; 1777 struct dlfb_data *dev = unode->dev;
1775 unsigned long flags; 1778 unsigned long flags;
1776 1779
1777 /* sync/async unlink faults aren't errors */ 1780 /* sync/async unlink faults aren't errors */
1778 if (urb->status) { 1781 if (urb->status) {
1779 if (!(urb->status == -ENOENT || 1782 if (!(urb->status == -ENOENT ||
1780 urb->status == -ECONNRESET || 1783 urb->status == -ECONNRESET ||
1781 urb->status == -ESHUTDOWN)) { 1784 urb->status == -ESHUTDOWN)) {
1782 pr_err("%s - nonzero write bulk status received: %d\n", 1785 pr_err("%s - nonzero write bulk status received: %d\n",
1783 __func__, urb->status); 1786 __func__, urb->status);
1784 atomic_set(&dev->lost_pixels, 1); 1787 atomic_set(&dev->lost_pixels, 1);
1785 } 1788 }
1786 } 1789 }
1787 1790
1788 urb->transfer_buffer_length = dev->urbs.size; /* reset to actual */ 1791 urb->transfer_buffer_length = dev->urbs.size; /* reset to actual */
1789 1792
1790 spin_lock_irqsave(&dev->urbs.lock, flags); 1793 spin_lock_irqsave(&dev->urbs.lock, flags);
1791 list_add_tail(&unode->entry, &dev->urbs.list); 1794 list_add_tail(&unode->entry, &dev->urbs.list);
1792 dev->urbs.available++; 1795 dev->urbs.available++;
1793 spin_unlock_irqrestore(&dev->urbs.lock, flags); 1796 spin_unlock_irqrestore(&dev->urbs.lock, flags);
1794 1797
1795 /* 1798 /*
1796 * When using fb_defio, we deadlock if up() is called 1799 * When using fb_defio, we deadlock if up() is called
1797 * while another is waiting. So queue to another process. 1800 * while another is waiting. So queue to another process.
1798 */ 1801 */
1799 if (fb_defio) 1802 if (fb_defio)
1800 schedule_delayed_work(&unode->release_urb_work, 0); 1803 schedule_delayed_work(&unode->release_urb_work, 0);
1801 else 1804 else
1802 up(&dev->urbs.limit_sem); 1805 up(&dev->urbs.limit_sem);
1803 } 1806 }
1804 1807
1805 static void dlfb_free_urb_list(struct dlfb_data *dev) 1808 static void dlfb_free_urb_list(struct dlfb_data *dev)
1806 { 1809 {
1807 int count = dev->urbs.count; 1810 int count = dev->urbs.count;
1808 struct list_head *node; 1811 struct list_head *node;
1809 struct urb_node *unode; 1812 struct urb_node *unode;
1810 struct urb *urb; 1813 struct urb *urb;
1811 int ret; 1814 int ret;
1812 unsigned long flags; 1815 unsigned long flags;
1813 1816
1814 pr_notice("Waiting for completes and freeing all render urbs\n"); 1817 pr_notice("Waiting for completes and freeing all render urbs\n");
1815 1818
1816 /* keep waiting and freeing, until we've got 'em all */ 1819 /* keep waiting and freeing, until we've got 'em all */
1817 while (count--) { 1820 while (count--) {
1818 1821
1819 /* Getting interrupted means a leak, but ok at shutdown */ 1822 /* Getting interrupted means a leak, but ok at shutdown */
1820 ret = down_interruptible(&dev->urbs.limit_sem); 1823 ret = down_interruptible(&dev->urbs.limit_sem);
1821 if (ret) 1824 if (ret)
1822 break; 1825 break;
1823 1826
1824 spin_lock_irqsave(&dev->urbs.lock, flags); 1827 spin_lock_irqsave(&dev->urbs.lock, flags);
1825 1828
1826 node = dev->urbs.list.next; /* have reserved one with sem */ 1829 node = dev->urbs.list.next; /* have reserved one with sem */
1827 list_del_init(node); 1830 list_del_init(node);
1828 1831
1829 spin_unlock_irqrestore(&dev->urbs.lock, flags); 1832 spin_unlock_irqrestore(&dev->urbs.lock, flags);
1830 1833
1831 unode = list_entry(node, struct urb_node, entry); 1834 unode = list_entry(node, struct urb_node, entry);
1832 urb = unode->urb; 1835 urb = unode->urb;
1833 1836
1834 /* Free each separately allocated piece */ 1837 /* Free each separately allocated piece */
1835 usb_free_coherent(urb->dev, dev->urbs.size, 1838 usb_free_coherent(urb->dev, dev->urbs.size,
1836 urb->transfer_buffer, urb->transfer_dma); 1839 urb->transfer_buffer, urb->transfer_dma);
1837 usb_free_urb(urb); 1840 usb_free_urb(urb);
1838 kfree(node); 1841 kfree(node);
1839 } 1842 }
1840 1843
1841 } 1844 }
1842 1845
1843 static int dlfb_alloc_urb_list(struct dlfb_data *dev, int count, size_t size) 1846 static int dlfb_alloc_urb_list(struct dlfb_data *dev, int count, size_t size)
1844 { 1847 {
1845 int i = 0; 1848 int i = 0;
1846 struct urb *urb; 1849 struct urb *urb;
1847 struct urb_node *unode; 1850 struct urb_node *unode;
1848 char *buf; 1851 char *buf;
1849 1852
1850 spin_lock_init(&dev->urbs.lock); 1853 spin_lock_init(&dev->urbs.lock);
1851 1854
1852 dev->urbs.size = size; 1855 dev->urbs.size = size;
1853 INIT_LIST_HEAD(&dev->urbs.list); 1856 INIT_LIST_HEAD(&dev->urbs.list);
1854 1857
1855 while (i < count) { 1858 while (i < count) {
1856 unode = kzalloc(sizeof(struct urb_node), GFP_KERNEL); 1859 unode = kzalloc(sizeof(struct urb_node), GFP_KERNEL);
1857 if (!unode) 1860 if (!unode)
1858 break; 1861 break;
1859 unode->dev = dev; 1862 unode->dev = dev;
1860 1863
1861 INIT_DELAYED_WORK(&unode->release_urb_work, 1864 INIT_DELAYED_WORK(&unode->release_urb_work,
1862 dlfb_release_urb_work); 1865 dlfb_release_urb_work);
1863 1866
1864 urb = usb_alloc_urb(0, GFP_KERNEL); 1867 urb = usb_alloc_urb(0, GFP_KERNEL);
1865 if (!urb) { 1868 if (!urb) {
1866 kfree(unode); 1869 kfree(unode);
1867 break; 1870 break;
1868 } 1871 }
1869 unode->urb = urb; 1872 unode->urb = urb;
1870 1873
1871 buf = usb_alloc_coherent(dev->udev, MAX_TRANSFER, GFP_KERNEL, 1874 buf = usb_alloc_coherent(dev->udev, MAX_TRANSFER, GFP_KERNEL,
1872 &urb->transfer_dma); 1875 &urb->transfer_dma);
1873 if (!buf) { 1876 if (!buf) {
1874 kfree(unode); 1877 kfree(unode);
1875 usb_free_urb(urb); 1878 usb_free_urb(urb);
1876 break; 1879 break;
1877 } 1880 }
1878 1881
1879 /* urb->transfer_buffer_length set to actual before submit */ 1882 /* urb->transfer_buffer_length set to actual before submit */
1880 usb_fill_bulk_urb(urb, dev->udev, usb_sndbulkpipe(dev->udev, 1), 1883 usb_fill_bulk_urb(urb, dev->udev, usb_sndbulkpipe(dev->udev, 1),
1881 buf, size, dlfb_urb_completion, unode); 1884 buf, size, dlfb_urb_completion, unode);
1882 urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP; 1885 urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
1883 1886
1884 list_add_tail(&unode->entry, &dev->urbs.list); 1887 list_add_tail(&unode->entry, &dev->urbs.list);
1885 1888
1886 i++; 1889 i++;
1887 } 1890 }
1888 1891
1889 sema_init(&dev->urbs.limit_sem, i); 1892 sema_init(&dev->urbs.limit_sem, i);
1890 dev->urbs.count = i; 1893 dev->urbs.count = i;
1891 dev->urbs.available = i; 1894 dev->urbs.available = i;
1892 1895
1893 pr_notice("allocated %d %d byte urbs\n", i, (int) size); 1896 pr_notice("allocated %d %d byte urbs\n", i, (int) size);
1894 1897
1895 return i; 1898 return i;
1896 } 1899 }
1897 1900
1898 static struct urb *dlfb_get_urb(struct dlfb_data *dev) 1901 static struct urb *dlfb_get_urb(struct dlfb_data *dev)
1899 { 1902 {
1900 int ret = 0; 1903 int ret = 0;
1901 struct list_head *entry; 1904 struct list_head *entry;
1902 struct urb_node *unode; 1905 struct urb_node *unode;
1903 struct urb *urb = NULL; 1906 struct urb *urb = NULL;
1904 unsigned long flags; 1907 unsigned long flags;
1905 1908
1906 /* Wait for an in-flight buffer to complete and get re-queued */ 1909 /* Wait for an in-flight buffer to complete and get re-queued */
1907 ret = down_timeout(&dev->urbs.limit_sem, GET_URB_TIMEOUT); 1910 ret = down_timeout(&dev->urbs.limit_sem, GET_URB_TIMEOUT);
1908 if (ret) { 1911 if (ret) {
1909 atomic_set(&dev->lost_pixels, 1); 1912 atomic_set(&dev->lost_pixels, 1);
1910 pr_warn("wait for urb interrupted: %x available: %d\n", 1913 pr_warn("wait for urb interrupted: %x available: %d\n",
1911 ret, dev->urbs.available); 1914 ret, dev->urbs.available);
1912 goto error; 1915 goto error;
1913 } 1916 }
1914 1917
1915 spin_lock_irqsave(&dev->urbs.lock, flags); 1918 spin_lock_irqsave(&dev->urbs.lock, flags);
1916 1919
1917 BUG_ON(list_empty(&dev->urbs.list)); /* reserved one with limit_sem */ 1920 BUG_ON(list_empty(&dev->urbs.list)); /* reserved one with limit_sem */
1918 entry = dev->urbs.list.next; 1921 entry = dev->urbs.list.next;
1919 list_del_init(entry); 1922 list_del_init(entry);
1920 dev->urbs.available--; 1923 dev->urbs.available--;
1921 1924
1922 spin_unlock_irqrestore(&dev->urbs.lock, flags); 1925 spin_unlock_irqrestore(&dev->urbs.lock, flags);
1923 1926
1924 unode = list_entry(entry, struct urb_node, entry); 1927 unode = list_entry(entry, struct urb_node, entry);
1925 urb = unode->urb; 1928 urb = unode->urb;
1926 1929
1927 error: 1930 error:
1928 return urb; 1931 return urb;
1929 } 1932 }
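dlfb_get_urb()/dlfb_urb_completion() implement a classic bounded pool: a counting semaphore tracks free buffers while a locked list holds them, so writers block (with timeout) when all urbs are in flight. A minimal single-threaded user-space sketch of that pattern, using POSIX semaphores and a plain index stack in place of the driver's spinlock-protected list (names are illustrative):

```c
#include <assert.h>
#include <semaphore.h>

/* Bounded buffer pool: the counting semaphore plays the role of
 * dev->urbs.limit_sem, the index stack stands in for the locked
 * free list. Single-threaded illustration; the driver pairs the
 * semaphore with a spinlock and uses down_timeout() to wait. */
#define POOL_SIZE 4

struct buf_pool {
	sem_t free_count;    /* counts free buffers */
	int free[POOL_SIZE]; /* indices of free buffers */
	int top;
};

static void pool_init(struct buf_pool *p)
{
	sem_init(&p->free_count, 0, POOL_SIZE);
	for (p->top = 0; p->top < POOL_SIZE; p->top++)
		p->free[p->top] = p->top;
}

static int pool_get(struct buf_pool *p)
{
	if (sem_trywait(&p->free_count) != 0)
		return -1;           /* pool exhausted; driver would wait */
	return p->free[--p->top];
}

static void pool_put(struct buf_pool *p, int idx)
{
	p->free[p->top++] = idx;     /* completion re-queues the buffer... */
	sem_post(&p->free_count);    /* ...then wakes one waiter */
}
```

Exhausting the pool makes pool_get() fail until a completion returns a buffer, which is exactly the backpressure the driver relies on to pace rendering against the USB bus.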
1930 1933
1931 static int dlfb_submit_urb(struct dlfb_data *dev, struct urb *urb, size_t len) 1934 static int dlfb_submit_urb(struct dlfb_data *dev, struct urb *urb, size_t len)
1932 { 1935 {
1933 int ret; 1936 int ret;
1934 1937
1935 BUG_ON(len > dev->urbs.size); 1938 BUG_ON(len > dev->urbs.size);
1936 1939
1937 urb->transfer_buffer_length = len; /* set to actual payload len */ 1940 urb->transfer_buffer_length = len; /* set to actual payload len */
1938 ret = usb_submit_urb(urb, GFP_KERNEL); 1941 ret = usb_submit_urb(urb, GFP_KERNEL);
1939 if (ret) { 1942 if (ret) {
1940 dlfb_urb_completion(urb); /* because no one else will */ 1943 dlfb_urb_completion(urb); /* because no one else will */
1941 atomic_set(&dev->lost_pixels, 1); 1944 atomic_set(&dev->lost_pixels, 1);
1942 pr_err("usb_submit_urb error %x\n", ret); 1945 pr_err("usb_submit_urb error %x\n", ret);
1943 } 1946 }
1944 return ret; 1947 return ret;
1945 } 1948 }
1946 1949
1947 module_param(console, bool, S_IWUSR | S_IRUSR | S_IWGRP | S_IRGRP); 1950 module_param(console, bool, S_IWUSR | S_IRUSR | S_IWGRP | S_IRGRP);
1948 MODULE_PARM_DESC(console, "Allow fbcon to consume first framebuffer found"); 1951 MODULE_PARM_DESC(console, "Allow fbcon to consume first framebuffer found");
1949 1952
1950 module_param(fb_defio, bool, S_IWUSR | S_IRUSR | S_IWGRP | S_IRGRP); 1953 module_param(fb_defio, bool, S_IWUSR | S_IRUSR | S_IWGRP | S_IRGRP);
1951 MODULE_PARM_DESC(fb_defio, "Enable fb_defio mmap support. *Experimental*"); 1954 MODULE_PARM_DESC(fb_defio, "Enable fb_defio mmap support. *Experimental*");
1955
1956 module_param(shadow, bool, S_IWUSR | S_IRUSR | S_IWGRP | S_IRGRP);
1957 MODULE_PARM_DESC(shadow, "Shadow vid mem. Disable to save mem but lose perf");
1952 1958
1953 MODULE_AUTHOR("Roberto De Ioris <roberto@unbit.it>, " 1959 MODULE_AUTHOR("Roberto De Ioris <roberto@unbit.it>, "
1954 "Jaya Kumar <jayakumar.lkml@gmail.com>, " 1960 "Jaya Kumar <jayakumar.lkml@gmail.com>, "
1955 "Bernie Thompson <bernie@plugable.com>"); 1961 "Bernie Thompson <bernie@plugable.com>");
1956 MODULE_DESCRIPTION("DisplayLink kernel framebuffer driver"); 1962 MODULE_DESCRIPTION("DisplayLink kernel framebuffer driver");
1957 MODULE_LICENSE("GPL"); 1963 MODULE_LICENSE("GPL");
1958 1964
1959 1965
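The shadow option this commit adds (like console and fb_defio above) can be set at module load time. A typical modprobe configuration fragment for a low-memory system might look like this; the file path is illustrative, the parameter name comes from this commit:

```
# /etc/modprobe.d/udlfb.conf (illustrative path)
# Disable the shadow framebuffer to save host memory on very
# low-memory systems, at the cost of sending all pixels over USB.
options udlfb shadow=0
```

Since the parameter is registered with owner read/write permissions, it should also appear under /sys/module/udlfb/parameters/shadow once loaded, though a runtime change would presumably only affect subsequently attached devices.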