Commit 246da26d37311cd1b1489575f305042dcdecfd50

Authored by Afzal Mohammed
1 parent 559d94b00c

ARM: OMAP2+: gpmc: generic timing calculation

Presently there are three peripherals that get their timings by
runtime calculation. Those peripherals can work with frequency
scaling that affects the gpmc clock, but the timing calculation
is done differently for each of them.

Here a generic runtime calculation method is proposed. The inputs
to this function were selected so that they represent timing
variables present in peripheral datasheets. The motive behind this
was to achieve DT bindings for the inputs as-is. Even though a few
of the tusb6010 timings could not be directly related to timings
normally found on peripherals, the expressions used were translated
to ones that could be justified.

There is room for improving the calculations, e.g. by calculating
timings for read & write operations in a more similar way. The
expressions derived here were tested for async onenand on omap3evm
(as the vanilla kernel does not have omap3evm onenand support, a
local patch was used). The calculations for the other peripherals,
tusb6010 and smc91x, were validated by simulation on omap3evm.

Regarding "we_on" for onenand async, it was found that even
for muxed address/data, it need not be greater than
"adv_wr_off", but rather could be derived from write setup
time for peripheral from start of access time, hence would
more be in line with peripheral timings. With this method
it was working fine. If it is required in some cases to
have "we_on" same as "wr_data_mux_bus" (i.e. greater than
"adv_wr_off"), another variable could be added to indicate
it. But such a requirement is not expected though.
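
(In the generic routine this works out as follows: when the
GPMC_HAS_WR_DATA_MUX_BUS capability is present, "we_on" is simply
rounded from the peripheral's write setup time "t_weasu"; otherwise
it falls back to "wr_data_mux_bus".)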

It has been observed that "adv_rd_off" & "adv_wr_off" are currently
calculated by adding an offset to "oe_on" and "we_on" respectively
in the case of smc91x. But the peripheral datasheet does not specify
this, so "adv_rd(wr)_off" has been derived (to be specific, made
independent of "oe_on" and "we_on") by observing the datasheet
rather than by adding an offset. Hence this generic routine is
expected to work for smc91x (91C96 on the RX51 board). This was
verified on smsc911x (9220 on OMAP3EVM), a similar ethernet
controller.

Timings are calculated in ps to prevent rounding errors and
converted to ns at the final stage so that these values can be fed
directly to gpmc_cs_set_timings(). gpmc_cs_set_timings() will be
modified to take ps once all custom timing routines are replaced by
the generic routine; at the same time the generic timing routine
will be modified to provide timings in ps. struct gpmc_timings field
types are upgraded from u16 to u32 so that they can hold ps values.
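
For illustration (the 166 MHz fclk below is only an assumed example,
not a value taken from this patch): with a ~6024 ps fclk tick, a
15 ns (15000 ps) device timing rounds up to ceil(15000 / 6024) = 3
ticks, is carried internally as 3 * 6024 = 18072 ps, and only at the
final stage becomes 18 ns for gpmc_cs_set_timings().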

The whole of this exercise is being done to achieve the driver and
DT conversion. If timings could not be calculated in a
peripheral-agnostic way, either the gpmc driver would have to be
peripheral-specific or a wrapper arrangement over the gpmc driver
would be required.

Signed-off-by: Afzal Mohammed <afzal@ti.com>

Showing 3 changed files with 529 additions and 20 deletions

Documentation/bus-devices/ti-gpmc.txt
  1 +GPMC (General Purpose Memory Controller):
  2 +=========================================
  3 +
  4 +GPMC is a unified memory controller dedicated to interfacing external
  5 +memory devices such as:
  6 + * Asynchronous SRAM-like memories and application-specific integrated
  7 + circuit devices.
  8 + * Asynchronous, synchronous, and page-mode burst NOR flash devices
  9 + * NAND flash
  10 + * Pseudo-SRAM devices
  11 +
  12 +GPMC is found on Texas Instruments SoCs (OMAP-based)
  13 +IP details: http://www.ti.com/lit/pdf/spruh73 section 7.1
  14 +
  15 +
  16 +GPMC generic timing calculation:
  17 +================================
  18 +
  19 +GPMC has certain timings that have to be programmed for proper
  20 +functioning of the peripheral, while the peripheral has its own set
  21 +of timings. For the peripheral to work with gpmc, the peripheral
  22 +timings have to be translated into a form gpmc can understand. How
  23 +they are translated depends on the connected peripheral. Certain
  24 +gpmc timings also depend on the gpmc clock frequency. Hence a
  25 +generic timing routine was developed to meet these requirements.
  26 +
  27 +The generic routine provides a method to calculate gpmc timings from
  28 +gpmc peripheral timings. struct gpmc_device_timings fields have to be
  29 +updated with timings from the datasheet of the peripheral that is
  30 +connected to gpmc. A few of the peripheral timings can be fed either
  31 +in time or in cycles; provision to handle this scenario has been
  32 +provided (refer to the struct gpmc_device_timings definition). It may
  33 +happen that a timing specified by the peripheral datasheet is not
  34 +present in the timing structure; in this case, try to correlate the
  35 +peripheral timing to one that is available. If that doesn't work, try
  36 +to add a new field as required by the peripheral, teach the generic
  37 +timing routine to handle it, and make sure it does not break any of
  38 +the existing users. Where the peripheral datasheet doesn't mention
  39 +certain struct gpmc_device_timings fields, zero those entries.
  40 +
  41 +The generic timing routine has been verified to work properly on
  42 +multiple onenand devices and the tusb6010 peripheral.
  43 +
  44 +A word of caution: the generic timing routine has been developed
  45 +based on an understanding of gpmc timings, peripheral timings and
  46 +the available custom timing routines, a kind of reverse engineering
  47 +without most of the datasheets & hardware (to be exact, none of those
  48 +supported in mainline having a custom timing routine) and by simulation.
  49 +
  50 +gpmc timing dependency on peripheral timings:
  51 +[<gpmc_timing>: <peripheral timing1>, <peripheral timing2> ...]
  52 +
  53 +1. common
  54 +cs_on: t_ceasu
  55 +adv_on: t_avdasu, t_ceavd
  56 +
  57 +2. sync common
  58 +sync_clk: clk
  59 +page_burst_access: t_bacc
  60 +clk_activation: t_ces, t_avds
  61 +
  62 +3. read async muxed
  63 +adv_rd_off: t_avdp_r
  64 +oe_on: t_oeasu, t_aavdh
  65 +access: t_iaa, t_oe, t_ce, t_aa
  66 +rd_cycle: t_rd_cycle, t_cez_r, t_oez
  67 +
  68 +4. read async non-muxed
  69 +adv_rd_off: t_avdp_r
  70 +oe_on: t_oeasu
  71 +access: t_iaa, t_oe, t_ce, t_aa
  72 +rd_cycle: t_rd_cycle, t_cez_r, t_oez
  73 +
  74 +5. read sync muxed
  75 +adv_rd_off: t_avdp_r, t_avdh
  76 +oe_on: t_oeasu, t_ach, cyc_aavdh_oe
  77 +access: t_iaa, cyc_iaa, cyc_oe
  78 +rd_cycle: t_cez_r, t_oez, t_ce_rdyz
  79 +
  80 +6. read sync non-muxed
  81 +adv_rd_off: t_avdp_r
  82 +oe_on: t_oeasu
  83 +access: t_iaa, cyc_iaa, cyc_oe
  84 +rd_cycle: t_cez_r, t_oez, t_ce_rdyz
  85 +
  86 +7. write async muxed
  87 +adv_wr_off: t_avdp_w
  88 +we_on, wr_data_mux_bus: t_weasu, t_aavdh, cyc_aavdh_we
  89 +we_off: t_wpl
  90 +cs_wr_off: t_wph
  91 +wr_cycle: t_cez_w, t_wr_cycle
  92 +
  93 +8. write async non-muxed
  94 +adv_wr_off: t_avdp_w
  95 +we_on, wr_data_mux_bus: t_weasu
  96 +we_off: t_wpl
  97 +cs_wr_off: t_wph
  98 +wr_cycle: t_cez_w, t_wr_cycle
  99 +
  100 +9. write sync muxed
  101 +adv_wr_off: t_avdp_w, t_avdh
  102 +we_on, wr_data_mux_bus: t_weasu, t_rdyo, t_aavdh, cyc_aavdh_we
  103 +we_off: t_wpl, cyc_wpl
  104 +cs_wr_off: t_wph
  105 +wr_cycle: t_cez_w, t_ce_rdyz
  106 +
  107 +10. write sync non-muxed
  108 +adv_wr_off: t_avdp_w
  109 +we_on, wr_data_mux_bus: t_weasu, t_rdyo
  110 +we_off: t_wpl, cyc_wpl
  111 +cs_wr_off: t_wph
  112 +wr_cycle: t_cez_w, t_ce_rdyz
  113 +
  114 +
  115 +Note: Many gpmc timings are dependent on other gpmc timings (a few
  116 +gpmc timings depend purely on other gpmc timings, which is why some
  117 +of the gpmc timings are missing above). This results in an indirect
  118 +dependency of peripheral timings on gpmc timings other than those
  119 +mentioned above; refer to the timing routine for more details. To
  120 +know what these peripheral timings correspond to, please see the
  121 +explanations in the struct gpmc_device_timings definition. For gpmc
  122 +timings, refer to the IP details (link above).
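
For illustration, a minimal usage sketch of the routine documented
above (not part of this patch; the peripheral timing values and the
names example_dev_t/example_retime are hypothetical, async non-muxed
case, assuming the mach-omap2 gpmc.h declarations are in scope):

	/* Hypothetical datasheet numbers, in picoseconds */
	static struct gpmc_device_timings example_dev_t = {
		.t_ceasu	= 4000,		/* address setup to CS valid */
		.t_oeasu	= 6000,		/* address setup to OE valid */
		.t_oe		= 45000,	/* access time from OE assertion */
		.t_ce		= 55000,	/* access time from CS assertion */
		.t_aa		= 55000,	/* access time from ADV assertion */
		.t_oez		= 20000,	/* OE deassertion to high Z */
		.t_cez_r	= 20000,	/* read CS deassertion to high Z */
		.t_rd_cycle	= 80000,	/* read cycle time */
		.t_weasu	= 6000,		/* address setup to WE valid */
		.t_wpl		= 40000,	/* write assertion time */
		.t_wph		= 20000,	/* write deassertion time */
		.t_cez_w	= 20000,	/* write CS deassertion to high Z */
		.t_wr_cycle	= 80000,	/* write cycle time */
		/* .mux, .sync_read, .sync_write left false: async, non-muxed */
	};

	static void example_retime(int cs)
	{
		struct gpmc_timings t;

		gpmc_calc_timings(&t, &example_dev_t);	/* fills t (in ns for now) */
		gpmc_cs_set_timings(cs, &t);		/* program the chip-select */
	}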
arch/arm/mach-omap2/gpmc.c
... ... @@ -230,6 +230,18 @@
230 230 return ticks * gpmc_get_fclk_period() / 1000;
231 231 }
232 232  
  233 +static unsigned int gpmc_ticks_to_ps(unsigned int ticks)
  234 +{
  235 + return ticks * gpmc_get_fclk_period();
  236 +}
  237 +
  238 +static unsigned int gpmc_round_ps_to_ticks(unsigned int time_ps)
  239 +{
  240 + unsigned long ticks = gpmc_ps_to_ticks(time_ps);
  241 +
  242 + return ticks * gpmc_get_fclk_period();
  243 +}
  244 +
233 245 static inline void gpmc_cs_modify_reg(int cs, int reg, u32 mask, bool value)
234 246 {
235 247 u32 l;
... ... @@ -792,6 +804,319 @@
792 804 return rc;
793 805 }
794 806 }
  807 +
  808 + return 0;
  809 +}
  810 +
  811 +static u32 gpmc_round_ps_to_sync_clk(u32 time_ps, u32 sync_clk)
  812 +{
  813 + u32 temp;
  814 + int div;
  815 +
  816 + div = gpmc_calc_divider(sync_clk);
  817 + temp = gpmc_ps_to_ticks(time_ps);
  818 + temp = (temp + div - 1) / div;
  819 + return gpmc_ticks_to_ps(temp * div);
  820 +}
  821 +
  822 +/* XXX: can the cycles be avoided ? */
  823 +static int gpmc_calc_sync_read_timings(struct gpmc_timings *gpmc_t,
  824 + struct gpmc_device_timings *dev_t)
  825 +{
  826 + bool mux = dev_t->mux;
  827 + u32 temp;
  828 +
  829 + /* adv_rd_off */
  830 + temp = dev_t->t_avdp_r;
  831 + /* XXX: mux check required ? */
  832 + if (mux) {
  833 + /* XXX: t_avdp not to be required for sync, only added for tusb
  834 + * this indirectly necessitates requirement of t_avdp_r and
  835 + * t_avdp_w instead of having a single t_avdp
  836 + */
  837 + temp = max_t(u32, temp, gpmc_t->clk_activation + dev_t->t_avdh);
  838 + temp = max_t(u32, gpmc_t->adv_on + gpmc_ticks_to_ps(1), temp);
  839 + }
  840 + gpmc_t->adv_rd_off = gpmc_round_ps_to_ticks(temp);
  841 +
  842 + /* oe_on */
  843 + temp = dev_t->t_oeasu; /* XXX: remove this ? */
  844 + if (mux) {
  845 + temp = max_t(u32, temp, gpmc_t->clk_activation + dev_t->t_ach);
  846 + temp = max_t(u32, temp, gpmc_t->adv_rd_off +
  847 + gpmc_ticks_to_ps(dev_t->cyc_aavdh_oe));
  848 + }
  849 + gpmc_t->oe_on = gpmc_round_ps_to_ticks(temp);
  850 +
  851 + /* access */
  852 + /* XXX: any scope for improvement ?, by combining oe_on
  853 + * and clk_activation, need to check whether
  854 + * access = clk_activation + round to sync clk ?
  855 + */
  856 + temp = max_t(u32, dev_t->t_iaa, dev_t->cyc_iaa * gpmc_t->sync_clk);
  857 + temp += gpmc_t->clk_activation;
  858 + if (dev_t->cyc_oe)
  859 + temp = max_t(u32, temp, gpmc_t->oe_on +
  860 + gpmc_ticks_to_ps(dev_t->cyc_oe));
  861 + gpmc_t->access = gpmc_round_ps_to_ticks(temp);
  862 +
  863 + gpmc_t->oe_off = gpmc_t->access + gpmc_ticks_to_ps(1);
  864 + gpmc_t->cs_rd_off = gpmc_t->oe_off;
  865 +
  866 + /* rd_cycle */
  867 + temp = max_t(u32, dev_t->t_cez_r, dev_t->t_oez);
  868 + temp = gpmc_round_ps_to_sync_clk(temp, gpmc_t->sync_clk) +
  869 + gpmc_t->access;
  870 + /* XXX: barter t_ce_rdyz with t_cez_r ? */
  871 + if (dev_t->t_ce_rdyz)
  872 + temp = max_t(u32, temp, gpmc_t->cs_rd_off + dev_t->t_ce_rdyz);
  873 + gpmc_t->rd_cycle = gpmc_round_ps_to_ticks(temp);
  874 +
  875 + return 0;
  876 +}
  877 +
  878 +static int gpmc_calc_sync_write_timings(struct gpmc_timings *gpmc_t,
  879 + struct gpmc_device_timings *dev_t)
  880 +{
  881 + bool mux = dev_t->mux;
  882 + u32 temp;
  883 +
  884 + /* adv_wr_off */
  885 + temp = dev_t->t_avdp_w;
  886 + if (mux) {
  887 + temp = max_t(u32, temp,
  888 + gpmc_t->clk_activation + dev_t->t_avdh);
  889 + temp = max_t(u32, gpmc_t->adv_on + gpmc_ticks_to_ps(1), temp);
  890 + }
  891 + gpmc_t->adv_wr_off = gpmc_round_ps_to_ticks(temp);
  892 +
  893 + /* wr_data_mux_bus */
  894 + temp = max_t(u32, dev_t->t_weasu,
  895 + gpmc_t->clk_activation + dev_t->t_rdyo);
  896 + /* XXX: shouldn't mux be kept as a whole for wr_data_mux_bus ?,
  897 + * and in that case remember to handle we_on properly
  898 + */
  899 + if (mux) {
  900 + temp = max_t(u32, temp,
  901 + gpmc_t->adv_wr_off + dev_t->t_aavdh);
  902 + temp = max_t(u32, temp, gpmc_t->adv_wr_off +
  903 + gpmc_ticks_to_ps(dev_t->cyc_aavdh_we));
  904 + }
  905 + gpmc_t->wr_data_mux_bus = gpmc_round_ps_to_ticks(temp);
  906 +
  907 + /* we_on */
  908 + if (gpmc_capability & GPMC_HAS_WR_DATA_MUX_BUS)
  909 + gpmc_t->we_on = gpmc_round_ps_to_ticks(dev_t->t_weasu);
  910 + else
  911 + gpmc_t->we_on = gpmc_t->wr_data_mux_bus;
  912 +
  913 + /* wr_access */
  914 + /* XXX: gpmc_capability check reqd ? , even if not, will not harm */
  915 + gpmc_t->wr_access = gpmc_t->access;
  916 +
  917 + /* we_off */
  918 + temp = gpmc_t->we_on + dev_t->t_wpl;
  919 + temp = max_t(u32, temp,
  920 + gpmc_t->wr_access + gpmc_ticks_to_ps(1));
  921 + temp = max_t(u32, temp,
  922 + gpmc_t->we_on + gpmc_ticks_to_ps(dev_t->cyc_wpl));
  923 + gpmc_t->we_off = gpmc_round_ps_to_ticks(temp);
  924 +
  925 + gpmc_t->cs_wr_off = gpmc_round_ps_to_ticks(gpmc_t->we_off +
  926 + dev_t->t_wph);
  927 +
  928 + /* wr_cycle */
  929 + temp = gpmc_round_ps_to_sync_clk(dev_t->t_cez_w, gpmc_t->sync_clk);
  930 + temp += gpmc_t->wr_access;
  931 + /* XXX: barter t_ce_rdyz with t_cez_w ? */
  932 + if (dev_t->t_ce_rdyz)
  933 + temp = max_t(u32, temp,
  934 + gpmc_t->cs_wr_off + dev_t->t_ce_rdyz);
  935 + gpmc_t->wr_cycle = gpmc_round_ps_to_ticks(temp);
  936 +
  937 + return 0;
  938 +}
  939 +
  940 +static int gpmc_calc_async_read_timings(struct gpmc_timings *gpmc_t,
  941 + struct gpmc_device_timings *dev_t)
  942 +{
  943 + bool mux = dev_t->mux;
  944 + u32 temp;
  945 +
  946 + /* adv_rd_off */
  947 + temp = dev_t->t_avdp_r;
  948 + if (mux)
  949 + temp = max_t(u32, gpmc_t->adv_on + gpmc_ticks_to_ps(1), temp);
  950 + gpmc_t->adv_rd_off = gpmc_round_ps_to_ticks(temp);
  951 +
  952 + /* oe_on */
  953 + temp = dev_t->t_oeasu;
  954 + if (mux)
  955 + temp = max_t(u32, temp,
  956 + gpmc_t->adv_rd_off + dev_t->t_aavdh);
  957 + gpmc_t->oe_on = gpmc_round_ps_to_ticks(temp);
  958 +
  959 + /* access */
  960 + temp = max_t(u32, dev_t->t_iaa, /* XXX: remove t_iaa in async ? */
  961 + gpmc_t->oe_on + dev_t->t_oe);
  962 + temp = max_t(u32, temp,
  963 + gpmc_t->cs_on + dev_t->t_ce);
  964 + temp = max_t(u32, temp,
  965 + gpmc_t->adv_on + dev_t->t_aa);
  966 + gpmc_t->access = gpmc_round_ps_to_ticks(temp);
  967 +
  968 + gpmc_t->oe_off = gpmc_t->access + gpmc_ticks_to_ps(1);
  969 + gpmc_t->cs_rd_off = gpmc_t->oe_off;
  970 +
  971 + /* rd_cycle */
  972 + temp = max_t(u32, dev_t->t_rd_cycle,
  973 + gpmc_t->cs_rd_off + dev_t->t_cez_r);
  974 + temp = max_t(u32, temp, gpmc_t->oe_off + dev_t->t_oez);
  975 + gpmc_t->rd_cycle = gpmc_round_ps_to_ticks(temp);
  976 +
  977 + return 0;
  978 +}
  979 +
  980 +static int gpmc_calc_async_write_timings(struct gpmc_timings *gpmc_t,
  981 + struct gpmc_device_timings *dev_t)
  982 +{
  983 + bool mux = dev_t->mux;
  984 + u32 temp;
  985 +
  986 + /* adv_wr_off */
  987 + temp = dev_t->t_avdp_w;
  988 + if (mux)
  989 + temp = max_t(u32, gpmc_t->adv_on + gpmc_ticks_to_ps(1), temp);
  990 + gpmc_t->adv_wr_off = gpmc_round_ps_to_ticks(temp);
  991 +
  992 + /* wr_data_mux_bus */
  993 + temp = dev_t->t_weasu;
  994 + if (mux) {
  995 + temp = max_t(u32, temp, gpmc_t->adv_wr_off + dev_t->t_aavdh);
  996 + temp = max_t(u32, temp, gpmc_t->adv_wr_off +
  997 + gpmc_ticks_to_ps(dev_t->cyc_aavdh_we));
  998 + }
  999 + gpmc_t->wr_data_mux_bus = gpmc_round_ps_to_ticks(temp);
  1000 +
  1001 + /* we_on */
  1002 + if (gpmc_capability & GPMC_HAS_WR_DATA_MUX_BUS)
  1003 + gpmc_t->we_on = gpmc_round_ps_to_ticks(dev_t->t_weasu);
  1004 + else
  1005 + gpmc_t->we_on = gpmc_t->wr_data_mux_bus;
  1006 +
  1007 + /* we_off */
  1008 + temp = gpmc_t->we_on + dev_t->t_wpl;
  1009 + gpmc_t->we_off = gpmc_round_ps_to_ticks(temp);
  1010 +
  1011 + gpmc_t->cs_wr_off = gpmc_round_ps_to_ticks(gpmc_t->we_off +
  1012 + dev_t->t_wph);
  1013 +
  1014 + /* wr_cycle */
  1015 + temp = max_t(u32, dev_t->t_wr_cycle,
  1016 + gpmc_t->cs_wr_off + dev_t->t_cez_w);
  1017 + gpmc_t->wr_cycle = gpmc_round_ps_to_ticks(temp);
  1018 +
  1019 + return 0;
  1020 +}
  1021 +
  1022 +static int gpmc_calc_sync_common_timings(struct gpmc_timings *gpmc_t,
  1023 + struct gpmc_device_timings *dev_t)
  1024 +{
  1025 + u32 temp;
  1026 +
  1027 + gpmc_t->sync_clk = gpmc_calc_divider(dev_t->clk) *
  1028 + gpmc_get_fclk_period();
  1029 +
  1030 + gpmc_t->page_burst_access = gpmc_round_ps_to_sync_clk(
  1031 + dev_t->t_bacc,
  1032 + gpmc_t->sync_clk);
  1033 +
  1034 + temp = max_t(u32, dev_t->t_ces, dev_t->t_avds);
  1035 + gpmc_t->clk_activation = gpmc_round_ps_to_ticks(temp);
  1036 +
  1037 + if (gpmc_calc_divider(gpmc_t->sync_clk) != 1)
  1038 + return 0;
  1039 +
  1040 + if (dev_t->ce_xdelay)
  1041 + gpmc_t->bool_timings.cs_extra_delay = true;
  1042 + if (dev_t->avd_xdelay)
  1043 + gpmc_t->bool_timings.adv_extra_delay = true;
  1044 + if (dev_t->oe_xdelay)
  1045 + gpmc_t->bool_timings.oe_extra_delay = true;
  1046 + if (dev_t->we_xdelay)
  1047 + gpmc_t->bool_timings.we_extra_delay = true;
  1048 +
  1049 + return 0;
  1050 +}
  1051 +
  1052 +static int gpmc_calc_common_timings(struct gpmc_timings *gpmc_t,
  1053 + struct gpmc_device_timings *dev_t)
  1054 +{
  1055 + u32 temp;
  1056 +
  1057 + /* cs_on */
  1058 + gpmc_t->cs_on = gpmc_round_ps_to_ticks(dev_t->t_ceasu);
  1059 +
  1060 + /* adv_on */
  1061 + temp = dev_t->t_avdasu;
  1062 + if (dev_t->t_ce_avd)
  1063 + temp = max_t(u32, temp,
  1064 + gpmc_t->cs_on + dev_t->t_ce_avd);
  1065 + gpmc_t->adv_on = gpmc_round_ps_to_ticks(temp);
  1066 +
  1067 + if (dev_t->sync_write || dev_t->sync_read)
  1068 + gpmc_calc_sync_common_timings(gpmc_t, dev_t);
  1069 +
  1070 + return 0;
  1071 +}
  1072 +
  1073 +/* TODO: remove this function once all peripherals are confirmed to
  1074 + * work with generic timing. Simultaneously gpmc_cs_set_timings()
  1075 + * has to be modified to handle timings in ps instead of ns
  1076 +*/
  1077 +static void gpmc_convert_ps_to_ns(struct gpmc_timings *t)
  1078 +{
  1079 + t->cs_on /= 1000;
  1080 + t->cs_rd_off /= 1000;
  1081 + t->cs_wr_off /= 1000;
  1082 + t->adv_on /= 1000;
  1083 + t->adv_rd_off /= 1000;
  1084 + t->adv_wr_off /= 1000;
  1085 + t->we_on /= 1000;
  1086 + t->we_off /= 1000;
  1087 + t->oe_on /= 1000;
  1088 + t->oe_off /= 1000;
  1089 + t->page_burst_access /= 1000;
  1090 + t->access /= 1000;
  1091 + t->rd_cycle /= 1000;
  1092 + t->wr_cycle /= 1000;
  1093 + t->bus_turnaround /= 1000;
  1094 + t->cycle2cycle_delay /= 1000;
  1095 + t->wait_monitoring /= 1000;
  1096 + t->clk_activation /= 1000;
  1097 + t->wr_access /= 1000;
  1098 + t->wr_data_mux_bus /= 1000;
  1099 +}
  1100 +
  1101 +int gpmc_calc_timings(struct gpmc_timings *gpmc_t,
  1102 + struct gpmc_device_timings *dev_t)
  1103 +{
  1104 + memset(gpmc_t, 0, sizeof(*gpmc_t));
  1105 +
  1106 + gpmc_calc_common_timings(gpmc_t, dev_t);
  1107 +
  1108 + if (dev_t->sync_read)
  1109 + gpmc_calc_sync_read_timings(gpmc_t, dev_t);
  1110 + else
  1111 + gpmc_calc_async_read_timings(gpmc_t, dev_t);
  1112 +
  1113 + if (dev_t->sync_write)
  1114 + gpmc_calc_sync_write_timings(gpmc_t, dev_t);
  1115 + else
  1116 + gpmc_calc_async_write_timings(gpmc_t, dev_t);
  1117 +
  1118 + /* TODO: remove, see function definition */
  1119 + gpmc_convert_ps_to_ns(gpmc_t);
795 1120  
796 1121 return 0;
797 1122 }
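
For illustration of gpmc_round_ps_to_sync_clk() (assumed numbers, not
taken from this patch): with a ~6024 ps fclk tick and a sync_clk of
two ticks (12048 ps), gpmc_calc_divider() yields 2; a 25000 ps input
becomes ceil(25000 / 6024) = 5 ticks, is rounded up to the next
multiple of the divider (6 ticks), and is returned as 6 * 6024 =
36144 ps.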
arch/arm/mach-omap2/gpmc.h
... ... @@ -94,41 +94,103 @@
94 94 u32 sync_clk;
95 95  
96 96 /* Chip-select signal timings corresponding to GPMC_CS_CONFIG2 */
97   - u16 cs_on; /* Assertion time */
98   - u16 cs_rd_off; /* Read deassertion time */
99   - u16 cs_wr_off; /* Write deassertion time */
  97 + u32 cs_on; /* Assertion time */
  98 + u32 cs_rd_off; /* Read deassertion time */
  99 + u32 cs_wr_off; /* Write deassertion time */
100 100  
101 101 /* ADV signal timings corresponding to GPMC_CONFIG3 */
102   - u16 adv_on; /* Assertion time */
103   - u16 adv_rd_off; /* Read deassertion time */
104   - u16 adv_wr_off; /* Write deassertion time */
  102 + u32 adv_on; /* Assertion time */
  103 + u32 adv_rd_off; /* Read deassertion time */
  104 + u32 adv_wr_off; /* Write deassertion time */
105 105  
106 106 /* WE signals timings corresponding to GPMC_CONFIG4 */
107   - u16 we_on; /* WE assertion time */
108   - u16 we_off; /* WE deassertion time */
  107 + u32 we_on; /* WE assertion time */
  108 + u32 we_off; /* WE deassertion time */
109 109  
110 110 /* OE signals timings corresponding to GPMC_CONFIG4 */
111   - u16 oe_on; /* OE assertion time */
112   - u16 oe_off; /* OE deassertion time */
  111 + u32 oe_on; /* OE assertion time */
  112 + u32 oe_off; /* OE deassertion time */
113 113  
114 114 /* Access time and cycle time timings corresponding to GPMC_CONFIG5 */
115   - u16 page_burst_access; /* Multiple access word delay */
116   - u16 access; /* Start-cycle to first data valid delay */
117   - u16 rd_cycle; /* Total read cycle time */
118   - u16 wr_cycle; /* Total write cycle time */
  115 + u32 page_burst_access; /* Multiple access word delay */
  116 + u32 access; /* Start-cycle to first data valid delay */
  117 + u32 rd_cycle; /* Total read cycle time */
  118 + u32 wr_cycle; /* Total write cycle time */
119 119  
120   - u16 bus_turnaround;
121   - u16 cycle2cycle_delay;
  120 + u32 bus_turnaround;
  121 + u32 cycle2cycle_delay;
122 122  
123   - u16 wait_monitoring;
124   - u16 clk_activation;
  123 + u32 wait_monitoring;
  124 + u32 clk_activation;
125 125  
126 126 /* The following are only on OMAP3430 */
127   - u16 wr_access; /* WRACCESSTIME */
128   - u16 wr_data_mux_bus; /* WRDATAONADMUXBUS */
  127 + u32 wr_access; /* WRACCESSTIME */
  128 + u32 wr_data_mux_bus; /* WRDATAONADMUXBUS */
129 129  
130 130 struct gpmc_bool_timings bool_timings;
131 131 };
  132 +
  133 +/* Device timings in picoseconds */
  134 +struct gpmc_device_timings {
  135 + u32 t_ceasu; /* address setup to CS valid */
  136 + u32 t_avdasu; /* address setup to ADV valid */
  137 + /* XXX: try to combine t_avdp_r & t_avdp_w. Issue is
  138 + * of tusb using these timings even for sync whilst
  139 + * ideally for adv_rd/(wr)_off it should have considered
  140 + * t_avdh instead. This indirectly necessitates r/w
  141 + * variations of t_avdp as it is possible to have one
  142 + * sync & other async
  143 + */
  144 + u32 t_avdp_r; /* ADV low time (what about t_cer ?) */
  145 + u32 t_avdp_w;
  146 + u32 t_aavdh; /* address hold time */
  147 + u32 t_oeasu; /* address setup to OE valid */
  148 + u32 t_aa; /* access time from ADV assertion */
  149 + u32 t_iaa; /* initial access time */
  150 + u32 t_oe; /* access time from OE assertion */
  151 + u32 t_ce; /* access time from CS assertion */
  152 + u32 t_rd_cycle; /* read cycle time */
  153 + u32 t_cez_r; /* read CS deassertion to high Z */
  154 + u32 t_cez_w; /* write CS deassertion to high Z */
  155 + u32 t_oez; /* OE deassertion to high Z */
  156 + u32 t_weasu; /* address setup to WE valid */
  157 + u32 t_wpl; /* write assertion time */
  158 + u32 t_wph; /* write deassertion time */
  159 + u32 t_wr_cycle; /* write cycle time */
  160 +
  161 + u32 clk;
  162 + u32 t_bacc; /* burst access valid clock to output delay */
  163 + u32 t_ces; /* CS setup time to clk */
  164 + u32 t_avds; /* ADV setup time to clk */
  165 + u32 t_avdh; /* ADV hold time from clk */
  166 + u32 t_ach; /* address hold time from clk */
  167 + u32 t_rdyo; /* clk to ready valid */
  168 +
  169 + u32 t_ce_rdyz; /* XXX: description ?, or use t_cez instead */
  170 + u32 t_ce_avd; /* CS on to ADV on delay */
  171 +
  172 + /* XXX: check the possibility of combining
  173 + * cyc_aavdh_oe & cyc_aavdh_we
  174 + */
  175 + u8 cyc_aavdh_oe;/* read address hold time in cycles */
  176 + u8 cyc_aavdh_we;/* write address hold time in cycles */
  177 + u8 cyc_oe; /* access time from OE assertion in cycles */
  178 + u8 cyc_wpl; /* write deassertion time in cycles */
  179 + u32 cyc_iaa; /* initial access time in cycles */
  180 +
  181 + bool mux; /* address & data muxed */
  182 + bool sync_write;/* synchronous write */
  183 + bool sync_read; /* synchronous read */
  184 +
  185 + /* extra delays */
  186 + bool ce_xdelay;
  187 + bool avd_xdelay;
  188 + bool oe_xdelay;
  189 + bool we_xdelay;
  190 +};
  191 +
  192 +extern int gpmc_calc_timings(struct gpmc_timings *gpmc_t,
  193 + struct gpmc_device_timings *dev_t);
132 194  
133 195 extern void gpmc_update_nand_reg(struct gpmc_nand_regs *reg, int cs);
134 196 extern int gpmc_get_client_irq(unsigned irq_config);